Test Report: Docker_Linux_crio_arm64 22414

7225a17c4161ad48c671012cf8528dba752659f9:2026-01-10:43179

Failed tests (27/332)

TestAddons/serial/Volcano (0.8s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-106930 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-106930 addons disable volcano --alsologtostderr -v=1: exit status 11 (798.292243ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:56:00.593828   11030 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:56:00.595781   11030 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:00.595819   11030 out.go:374] Setting ErrFile to fd 2...
	I0110 01:56:00.595827   11030 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:00.596122   11030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 01:56:00.596493   11030 mustload.go:66] Loading cluster: addons-106930
	I0110 01:56:00.596879   11030 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:00.596902   11030 addons.go:622] checking whether the cluster is paused
	I0110 01:56:00.597011   11030 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:00.597026   11030 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:56:00.597606   11030 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:56:00.629999   11030 ssh_runner.go:195] Run: systemctl --version
	I0110 01:56:00.630059   11030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:56:00.647663   11030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:56:00.754417   11030 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:56:00.754561   11030 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:56:00.800551   11030 cri.go:96] found id: "5dafbb5913003c3cb4147e90f1dde30311fabfa04e53c2fe9e8b29032265715e"
	I0110 01:56:00.800573   11030 cri.go:96] found id: "54899c00ceb5888bd4e38adcc09f61d6f4613e853ecdd6b32e4c5bb99d581e72"
	I0110 01:56:00.800579   11030 cri.go:96] found id: "6b948bf13d65c047739bccae4ed739b9f7fa926df29d62f31737a93200f79453"
	I0110 01:56:00.800583   11030 cri.go:96] found id: "7eebd1b695c51c83b430b93742ee73b2ca9993855903b7e6f7ae9d85c8d2fc50"
	I0110 01:56:00.800600   11030 cri.go:96] found id: "d95a18b705959f30d3471889d63995aa4063cde1d6299c4b2f1c667a21a75efd"
	I0110 01:56:00.800605   11030 cri.go:96] found id: "a458547631bdc917e613986d8f7ac6fd7565359fe88471bbce824f1874e6ec32"
	I0110 01:56:00.800618   11030 cri.go:96] found id: "abd9f763f3604b1b1ed81f007194c72054c1a43886832027733e3a1212a8a3b9"
	I0110 01:56:00.800631   11030 cri.go:96] found id: "e27ae252f277861a588db1ed9829c8aa537b82b2413c3d13c83bffbb69308234"
	I0110 01:56:00.800636   11030 cri.go:96] found id: "a9693cc77cf0c484c4c27fa1cdee8533ce5f5d13d0d4db3bbb53a00565b6f118"
	I0110 01:56:00.800658   11030 cri.go:96] found id: "2c8af457d0f780608fcdfa1b8cb701ac07004f54e4c6aa0a8e3274b148b5e61b"
	I0110 01:56:00.800662   11030 cri.go:96] found id: "dbe5d7a996f35eb0c94f70151b1bf92153b3fec733f1ecc9a041cb88ee462f6a"
	I0110 01:56:00.800665   11030 cri.go:96] found id: "a23567b0b791f5de297c9aef90ff8140157bc0649af00ce0e5e30e85bd9ad5ab"
	I0110 01:56:00.800668   11030 cri.go:96] found id: "8bce067de04e9ddb264d6b93bf83726a56f1d31349205b41d7b0321df22fc8d2"
	I0110 01:56:00.800671   11030 cri.go:96] found id: "245c82556c09186d3976774f81d9cd646157daeb5a2ffd046ba607941618606e"
	I0110 01:56:00.800674   11030 cri.go:96] found id: "b6868c2a5d4ce10096791acf437ee992cfdbd960bd8a366942e1351e6d349ca0"
	I0110 01:56:00.800680   11030 cri.go:96] found id: "3e0c2863ef6c21cbabb9e19c9f8f04a86e18a362494488884d4f1773f7580030"
	I0110 01:56:00.800683   11030 cri.go:96] found id: "27448255d1e1c54f8723ee7cca1526b639c1aebd34bdb2d75396a1ac1817885e"
	I0110 01:56:00.800686   11030 cri.go:96] found id: "6f32539141278af9c24103ee0c0a5e69ce237dbbe3695b92b0c5bf3956b30002"
	I0110 01:56:00.800690   11030 cri.go:96] found id: "5a226148b0ac79fc9d466c6da91dac92efac9abb8d69f4c01f8ce6b764e6b7bc"
	I0110 01:56:00.800699   11030 cri.go:96] found id: "014b6f8800d7d2127c3f3e57c97f5ed5d5a2361c80253dc74bd9f42bf604e9c2"
	I0110 01:56:00.800704   11030 cri.go:96] found id: "c951c20416799b3f43a759d9a52dedde3ee480d19bc9e255bb3cc1c62a5a8d25"
	I0110 01:56:00.800708   11030 cri.go:96] found id: "1c96ddd0e76bb0b8b444708102feeda0babec3472e7b33767118eb64739b9136"
	I0110 01:56:00.800711   11030 cri.go:96] found id: "80a58bbf677ab4ba0d96a2e9411138aa345db6101e0e1af2e9ea8754ffd39109"
	I0110 01:56:00.800714   11030 cri.go:96] found id: ""
	I0110 01:56:00.800767   11030 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:56:00.820727   11030 out.go:203] 
	W0110 01:56:00.823632   11030 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:56:00.823665   11030 out.go:285] * 
	* 
	W0110 01:56:01.302977   11030 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:56:01.305947   11030 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-106930 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.80s)
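
The failure above, and the addons-disable failures that follow, share the same signature: before disabling an addon, minikube checks whether the cluster is paused by listing containers with "sudo runc list -f json" on the node, and on this CRI-O node /run/runc does not exist, so the check exits 1 and the command aborts with MK_ADDON_DISABLE_PAUSED. A minimal reproduction sketch, assuming the addons-106930 profile from this run is still up (illustrative only, not part of the test suite):

	# Re-run the pause check that the disable path performs on the node.
	out/minikube-linux-arm64 -p addons-106930 ssh -- sudo runc list -f json
	# Expected to match the stderr captured above:
	#   time="..." level=error msg="open /run/runc: no such file or directory"
	# with exit status 1, which minikube surfaces as MK_ADDON_DISABLE_PAUSED.

Whether /run/runc is absent because this CRI-O node uses a different OCI runtime or simply keeps runtime state elsewhere is an inference from the logs, not something this report verifies.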

                                                
                                    
TestAddons/parallel/Registry (16.59s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 4.775427ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-rcs59" [375226f7-84d3-439b-9562-be87a865abbe] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003940465s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-h9l6w" [7d1bade3-9c33-4933-a888-ed07ffed5bfb] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00396127s
addons_test.go:394: (dbg) Run:  kubectl --context addons-106930 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-106930 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-106930 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.061411187s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-106930 ip
2026/01/10 01:56:27 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-106930 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-106930 addons disable registry --alsologtostderr -v=1: exit status 11 (258.419922ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:56:27.096160   11618 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:56:27.096434   11618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:27.096468   11618 out.go:374] Setting ErrFile to fd 2...
	I0110 01:56:27.096488   11618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:27.096763   11618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 01:56:27.097084   11618 mustload.go:66] Loading cluster: addons-106930
	I0110 01:56:27.097482   11618 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:27.097529   11618 addons.go:622] checking whether the cluster is paused
	I0110 01:56:27.097662   11618 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:27.097698   11618 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:56:27.098264   11618 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:56:27.115581   11618 ssh_runner.go:195] Run: systemctl --version
	I0110 01:56:27.115646   11618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:56:27.137144   11618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:56:27.242210   11618 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:56:27.242297   11618 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:56:27.272002   11618 cri.go:96] found id: "5dafbb5913003c3cb4147e90f1dde30311fabfa04e53c2fe9e8b29032265715e"
	I0110 01:56:27.272024   11618 cri.go:96] found id: "54899c00ceb5888bd4e38adcc09f61d6f4613e853ecdd6b32e4c5bb99d581e72"
	I0110 01:56:27.272029   11618 cri.go:96] found id: "6b948bf13d65c047739bccae4ed739b9f7fa926df29d62f31737a93200f79453"
	I0110 01:56:27.272033   11618 cri.go:96] found id: "7eebd1b695c51c83b430b93742ee73b2ca9993855903b7e6f7ae9d85c8d2fc50"
	I0110 01:56:27.272037   11618 cri.go:96] found id: "d95a18b705959f30d3471889d63995aa4063cde1d6299c4b2f1c667a21a75efd"
	I0110 01:56:27.272041   11618 cri.go:96] found id: "a458547631bdc917e613986d8f7ac6fd7565359fe88471bbce824f1874e6ec32"
	I0110 01:56:27.272045   11618 cri.go:96] found id: "abd9f763f3604b1b1ed81f007194c72054c1a43886832027733e3a1212a8a3b9"
	I0110 01:56:27.272048   11618 cri.go:96] found id: "e27ae252f277861a588db1ed9829c8aa537b82b2413c3d13c83bffbb69308234"
	I0110 01:56:27.272051   11618 cri.go:96] found id: "a9693cc77cf0c484c4c27fa1cdee8533ce5f5d13d0d4db3bbb53a00565b6f118"
	I0110 01:56:27.272058   11618 cri.go:96] found id: "2c8af457d0f780608fcdfa1b8cb701ac07004f54e4c6aa0a8e3274b148b5e61b"
	I0110 01:56:27.272066   11618 cri.go:96] found id: "dbe5d7a996f35eb0c94f70151b1bf92153b3fec733f1ecc9a041cb88ee462f6a"
	I0110 01:56:27.272069   11618 cri.go:96] found id: "a23567b0b791f5de297c9aef90ff8140157bc0649af00ce0e5e30e85bd9ad5ab"
	I0110 01:56:27.272072   11618 cri.go:96] found id: "8bce067de04e9ddb264d6b93bf83726a56f1d31349205b41d7b0321df22fc8d2"
	I0110 01:56:27.272076   11618 cri.go:96] found id: "245c82556c09186d3976774f81d9cd646157daeb5a2ffd046ba607941618606e"
	I0110 01:56:27.272079   11618 cri.go:96] found id: "b6868c2a5d4ce10096791acf437ee992cfdbd960bd8a366942e1351e6d349ca0"
	I0110 01:56:27.272088   11618 cri.go:96] found id: "3e0c2863ef6c21cbabb9e19c9f8f04a86e18a362494488884d4f1773f7580030"
	I0110 01:56:27.272092   11618 cri.go:96] found id: "27448255d1e1c54f8723ee7cca1526b639c1aebd34bdb2d75396a1ac1817885e"
	I0110 01:56:27.272099   11618 cri.go:96] found id: "6f32539141278af9c24103ee0c0a5e69ce237dbbe3695b92b0c5bf3956b30002"
	I0110 01:56:27.272107   11618 cri.go:96] found id: "5a226148b0ac79fc9d466c6da91dac92efac9abb8d69f4c01f8ce6b764e6b7bc"
	I0110 01:56:27.272110   11618 cri.go:96] found id: "014b6f8800d7d2127c3f3e57c97f5ed5d5a2361c80253dc74bd9f42bf604e9c2"
	I0110 01:56:27.272115   11618 cri.go:96] found id: "c951c20416799b3f43a759d9a52dedde3ee480d19bc9e255bb3cc1c62a5a8d25"
	I0110 01:56:27.272118   11618 cri.go:96] found id: "1c96ddd0e76bb0b8b444708102feeda0babec3472e7b33767118eb64739b9136"
	I0110 01:56:27.272122   11618 cri.go:96] found id: "80a58bbf677ab4ba0d96a2e9411138aa345db6101e0e1af2e9ea8754ffd39109"
	I0110 01:56:27.272125   11618 cri.go:96] found id: ""
	I0110 01:56:27.272172   11618 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:56:27.287214   11618 out.go:203] 
	W0110 01:56:27.290241   11618 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:56:27.290280   11618 out.go:285] * 
	* 
	W0110 01:56:27.292024   11618 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:56:27.295007   11618 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-106930 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (16.59s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.55s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 4.267513ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-106930
addons_test.go:334: (dbg) Run:  kubectl --context addons-106930 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-106930 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-106930 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (281.872875ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:56:56.447096   13460 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:56:56.447275   13460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:56.447286   13460 out.go:374] Setting ErrFile to fd 2...
	I0110 01:56:56.447292   13460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:56.447537   13460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 01:56:56.447858   13460 mustload.go:66] Loading cluster: addons-106930
	I0110 01:56:56.448260   13460 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:56.448279   13460 addons.go:622] checking whether the cluster is paused
	I0110 01:56:56.448409   13460 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:56.448424   13460 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:56:56.448957   13460 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:56:56.468957   13460 ssh_runner.go:195] Run: systemctl --version
	I0110 01:56:56.469031   13460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:56:56.487051   13460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:56:56.590605   13460 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:56:56.590759   13460 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:56:56.633357   13460 cri.go:96] found id: "5dafbb5913003c3cb4147e90f1dde30311fabfa04e53c2fe9e8b29032265715e"
	I0110 01:56:56.633420   13460 cri.go:96] found id: "54899c00ceb5888bd4e38adcc09f61d6f4613e853ecdd6b32e4c5bb99d581e72"
	I0110 01:56:56.633439   13460 cri.go:96] found id: "6b948bf13d65c047739bccae4ed739b9f7fa926df29d62f31737a93200f79453"
	I0110 01:56:56.633457   13460 cri.go:96] found id: "7eebd1b695c51c83b430b93742ee73b2ca9993855903b7e6f7ae9d85c8d2fc50"
	I0110 01:56:56.633475   13460 cri.go:96] found id: "d95a18b705959f30d3471889d63995aa4063cde1d6299c4b2f1c667a21a75efd"
	I0110 01:56:56.633505   13460 cri.go:96] found id: "a458547631bdc917e613986d8f7ac6fd7565359fe88471bbce824f1874e6ec32"
	I0110 01:56:56.633529   13460 cri.go:96] found id: "abd9f763f3604b1b1ed81f007194c72054c1a43886832027733e3a1212a8a3b9"
	I0110 01:56:56.633549   13460 cri.go:96] found id: "e27ae252f277861a588db1ed9829c8aa537b82b2413c3d13c83bffbb69308234"
	I0110 01:56:56.633569   13460 cri.go:96] found id: "a9693cc77cf0c484c4c27fa1cdee8533ce5f5d13d0d4db3bbb53a00565b6f118"
	I0110 01:56:56.633591   13460 cri.go:96] found id: "2c8af457d0f780608fcdfa1b8cb701ac07004f54e4c6aa0a8e3274b148b5e61b"
	I0110 01:56:56.633624   13460 cri.go:96] found id: "dbe5d7a996f35eb0c94f70151b1bf92153b3fec733f1ecc9a041cb88ee462f6a"
	I0110 01:56:56.633641   13460 cri.go:96] found id: "a23567b0b791f5de297c9aef90ff8140157bc0649af00ce0e5e30e85bd9ad5ab"
	I0110 01:56:56.633658   13460 cri.go:96] found id: "8bce067de04e9ddb264d6b93bf83726a56f1d31349205b41d7b0321df22fc8d2"
	I0110 01:56:56.633678   13460 cri.go:96] found id: "245c82556c09186d3976774f81d9cd646157daeb5a2ffd046ba607941618606e"
	I0110 01:56:56.633708   13460 cri.go:96] found id: "b6868c2a5d4ce10096791acf437ee992cfdbd960bd8a366942e1351e6d349ca0"
	I0110 01:56:56.633750   13460 cri.go:96] found id: "3e0c2863ef6c21cbabb9e19c9f8f04a86e18a362494488884d4f1773f7580030"
	I0110 01:56:56.633772   13460 cri.go:96] found id: "27448255d1e1c54f8723ee7cca1526b639c1aebd34bdb2d75396a1ac1817885e"
	I0110 01:56:56.633796   13460 cri.go:96] found id: "6f32539141278af9c24103ee0c0a5e69ce237dbbe3695b92b0c5bf3956b30002"
	I0110 01:56:56.633826   13460 cri.go:96] found id: "5a226148b0ac79fc9d466c6da91dac92efac9abb8d69f4c01f8ce6b764e6b7bc"
	I0110 01:56:56.633847   13460 cri.go:96] found id: "014b6f8800d7d2127c3f3e57c97f5ed5d5a2361c80253dc74bd9f42bf604e9c2"
	I0110 01:56:56.633866   13460 cri.go:96] found id: "c951c20416799b3f43a759d9a52dedde3ee480d19bc9e255bb3cc1c62a5a8d25"
	I0110 01:56:56.633884   13460 cri.go:96] found id: "1c96ddd0e76bb0b8b444708102feeda0babec3472e7b33767118eb64739b9136"
	I0110 01:56:56.633902   13460 cri.go:96] found id: "80a58bbf677ab4ba0d96a2e9411138aa345db6101e0e1af2e9ea8754ffd39109"
	I0110 01:56:56.633929   13460 cri.go:96] found id: ""
	I0110 01:56:56.634009   13460 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:56:56.660445   13460 out.go:203] 
	W0110 01:56:56.663224   13460 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:56:56.663247   13460 out.go:285] * 
	* 
	W0110 01:56:56.664980   13460 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:56:56.667894   13460 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-106930 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.55s)

                                                
                                    
TestAddons/parallel/Ingress (8.4s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-106930 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-106930 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-106930 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [3d76f49b-6132-4f3f-8a4a-b293017c5dfa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [3d76f49b-6132-4f3f-8a4a-b293017c5dfa] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 6.003354308s
I0110 01:56:54.486409    4168 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-106930 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-106930 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-106930 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-106930 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-106930 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (260.048461ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:56:55.547435   13326 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:56:55.547617   13326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:55.547630   13326 out.go:374] Setting ErrFile to fd 2...
	I0110 01:56:55.547635   13326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:55.548040   13326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 01:56:55.548380   13326 mustload.go:66] Loading cluster: addons-106930
	I0110 01:56:55.548989   13326 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:55.549012   13326 addons.go:622] checking whether the cluster is paused
	I0110 01:56:55.549140   13326 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:55.549159   13326 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:56:55.549873   13326 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:56:55.569570   13326 ssh_runner.go:195] Run: systemctl --version
	I0110 01:56:55.569624   13326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:56:55.587315   13326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:56:55.691930   13326 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:56:55.692058   13326 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:56:55.733629   13326 cri.go:96] found id: "5dafbb5913003c3cb4147e90f1dde30311fabfa04e53c2fe9e8b29032265715e"
	I0110 01:56:55.733691   13326 cri.go:96] found id: "54899c00ceb5888bd4e38adcc09f61d6f4613e853ecdd6b32e4c5bb99d581e72"
	I0110 01:56:55.733712   13326 cri.go:96] found id: "6b948bf13d65c047739bccae4ed739b9f7fa926df29d62f31737a93200f79453"
	I0110 01:56:55.733733   13326 cri.go:96] found id: "7eebd1b695c51c83b430b93742ee73b2ca9993855903b7e6f7ae9d85c8d2fc50"
	I0110 01:56:55.733769   13326 cri.go:96] found id: "d95a18b705959f30d3471889d63995aa4063cde1d6299c4b2f1c667a21a75efd"
	I0110 01:56:55.733790   13326 cri.go:96] found id: "a458547631bdc917e613986d8f7ac6fd7565359fe88471bbce824f1874e6ec32"
	I0110 01:56:55.733806   13326 cri.go:96] found id: "abd9f763f3604b1b1ed81f007194c72054c1a43886832027733e3a1212a8a3b9"
	I0110 01:56:55.733823   13326 cri.go:96] found id: "e27ae252f277861a588db1ed9829c8aa537b82b2413c3d13c83bffbb69308234"
	I0110 01:56:55.733843   13326 cri.go:96] found id: "a9693cc77cf0c484c4c27fa1cdee8533ce5f5d13d0d4db3bbb53a00565b6f118"
	I0110 01:56:55.733874   13326 cri.go:96] found id: "2c8af457d0f780608fcdfa1b8cb701ac07004f54e4c6aa0a8e3274b148b5e61b"
	I0110 01:56:55.733900   13326 cri.go:96] found id: "dbe5d7a996f35eb0c94f70151b1bf92153b3fec733f1ecc9a041cb88ee462f6a"
	I0110 01:56:55.733921   13326 cri.go:96] found id: "a23567b0b791f5de297c9aef90ff8140157bc0649af00ce0e5e30e85bd9ad5ab"
	I0110 01:56:55.733943   13326 cri.go:96] found id: "8bce067de04e9ddb264d6b93bf83726a56f1d31349205b41d7b0321df22fc8d2"
	I0110 01:56:55.733963   13326 cri.go:96] found id: "245c82556c09186d3976774f81d9cd646157daeb5a2ffd046ba607941618606e"
	I0110 01:56:55.733994   13326 cri.go:96] found id: "b6868c2a5d4ce10096791acf437ee992cfdbd960bd8a366942e1351e6d349ca0"
	I0110 01:56:55.734029   13326 cri.go:96] found id: "3e0c2863ef6c21cbabb9e19c9f8f04a86e18a362494488884d4f1773f7580030"
	I0110 01:56:55.734049   13326 cri.go:96] found id: "27448255d1e1c54f8723ee7cca1526b639c1aebd34bdb2d75396a1ac1817885e"
	I0110 01:56:55.734079   13326 cri.go:96] found id: "6f32539141278af9c24103ee0c0a5e69ce237dbbe3695b92b0c5bf3956b30002"
	I0110 01:56:55.734113   13326 cri.go:96] found id: "5a226148b0ac79fc9d466c6da91dac92efac9abb8d69f4c01f8ce6b764e6b7bc"
	I0110 01:56:55.734133   13326 cri.go:96] found id: "014b6f8800d7d2127c3f3e57c97f5ed5d5a2361c80253dc74bd9f42bf604e9c2"
	I0110 01:56:55.734157   13326 cri.go:96] found id: "c951c20416799b3f43a759d9a52dedde3ee480d19bc9e255bb3cc1c62a5a8d25"
	I0110 01:56:55.734186   13326 cri.go:96] found id: "1c96ddd0e76bb0b8b444708102feeda0babec3472e7b33767118eb64739b9136"
	I0110 01:56:55.734213   13326 cri.go:96] found id: "80a58bbf677ab4ba0d96a2e9411138aa345db6101e0e1af2e9ea8754ffd39109"
	I0110 01:56:55.734234   13326 cri.go:96] found id: ""
	I0110 01:56:55.734316   13326 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:56:55.753676   13326 out.go:203] 
	W0110 01:56:55.756892   13326 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:56:55.756917   13326 out.go:285] * 
	* 
	W0110 01:56:55.758750   13326 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:56:55.762697   13326 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-106930 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-106930 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-106930 addons disable ingress --alsologtostderr -v=1: exit status 11 (352.951847ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:56:55.864021   13385 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:56:55.864264   13385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:55.864288   13385 out.go:374] Setting ErrFile to fd 2...
	I0110 01:56:55.864307   13385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:55.864595   13385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 01:56:55.864904   13385 mustload.go:66] Loading cluster: addons-106930
	I0110 01:56:55.865295   13385 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:55.865331   13385 addons.go:622] checking whether the cluster is paused
	I0110 01:56:55.865462   13385 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:55.865488   13385 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:56:55.866011   13385 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:56:55.889163   13385 ssh_runner.go:195] Run: systemctl --version
	I0110 01:56:55.889222   13385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:56:55.920000   13385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:56:56.031571   13385 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:56:56.031668   13385 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:56:56.085951   13385 cri.go:96] found id: "5dafbb5913003c3cb4147e90f1dde30311fabfa04e53c2fe9e8b29032265715e"
	I0110 01:56:56.085969   13385 cri.go:96] found id: "54899c00ceb5888bd4e38adcc09f61d6f4613e853ecdd6b32e4c5bb99d581e72"
	I0110 01:56:56.085974   13385 cri.go:96] found id: "6b948bf13d65c047739bccae4ed739b9f7fa926df29d62f31737a93200f79453"
	I0110 01:56:56.085977   13385 cri.go:96] found id: "7eebd1b695c51c83b430b93742ee73b2ca9993855903b7e6f7ae9d85c8d2fc50"
	I0110 01:56:56.085983   13385 cri.go:96] found id: "d95a18b705959f30d3471889d63995aa4063cde1d6299c4b2f1c667a21a75efd"
	I0110 01:56:56.085987   13385 cri.go:96] found id: "a458547631bdc917e613986d8f7ac6fd7565359fe88471bbce824f1874e6ec32"
	I0110 01:56:56.085990   13385 cri.go:96] found id: "abd9f763f3604b1b1ed81f007194c72054c1a43886832027733e3a1212a8a3b9"
	I0110 01:56:56.085993   13385 cri.go:96] found id: "e27ae252f277861a588db1ed9829c8aa537b82b2413c3d13c83bffbb69308234"
	I0110 01:56:56.085996   13385 cri.go:96] found id: "a9693cc77cf0c484c4c27fa1cdee8533ce5f5d13d0d4db3bbb53a00565b6f118"
	I0110 01:56:56.086002   13385 cri.go:96] found id: "2c8af457d0f780608fcdfa1b8cb701ac07004f54e4c6aa0a8e3274b148b5e61b"
	I0110 01:56:56.086010   13385 cri.go:96] found id: "dbe5d7a996f35eb0c94f70151b1bf92153b3fec733f1ecc9a041cb88ee462f6a"
	I0110 01:56:56.086014   13385 cri.go:96] found id: "a23567b0b791f5de297c9aef90ff8140157bc0649af00ce0e5e30e85bd9ad5ab"
	I0110 01:56:56.086017   13385 cri.go:96] found id: "8bce067de04e9ddb264d6b93bf83726a56f1d31349205b41d7b0321df22fc8d2"
	I0110 01:56:56.086019   13385 cri.go:96] found id: "245c82556c09186d3976774f81d9cd646157daeb5a2ffd046ba607941618606e"
	I0110 01:56:56.086022   13385 cri.go:96] found id: "b6868c2a5d4ce10096791acf437ee992cfdbd960bd8a366942e1351e6d349ca0"
	I0110 01:56:56.086027   13385 cri.go:96] found id: "3e0c2863ef6c21cbabb9e19c9f8f04a86e18a362494488884d4f1773f7580030"
	I0110 01:56:56.086030   13385 cri.go:96] found id: "27448255d1e1c54f8723ee7cca1526b639c1aebd34bdb2d75396a1ac1817885e"
	I0110 01:56:56.086034   13385 cri.go:96] found id: "6f32539141278af9c24103ee0c0a5e69ce237dbbe3695b92b0c5bf3956b30002"
	I0110 01:56:56.086036   13385 cri.go:96] found id: "5a226148b0ac79fc9d466c6da91dac92efac9abb8d69f4c01f8ce6b764e6b7bc"
	I0110 01:56:56.086039   13385 cri.go:96] found id: "014b6f8800d7d2127c3f3e57c97f5ed5d5a2361c80253dc74bd9f42bf604e9c2"
	I0110 01:56:56.086044   13385 cri.go:96] found id: "c951c20416799b3f43a759d9a52dedde3ee480d19bc9e255bb3cc1c62a5a8d25"
	I0110 01:56:56.086047   13385 cri.go:96] found id: "1c96ddd0e76bb0b8b444708102feeda0babec3472e7b33767118eb64739b9136"
	I0110 01:56:56.086050   13385 cri.go:96] found id: "80a58bbf677ab4ba0d96a2e9411138aa345db6101e0e1af2e9ea8754ffd39109"
	I0110 01:56:56.086053   13385 cri.go:96] found id: ""
	I0110 01:56:56.086101   13385 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:56:56.107711   13385 out.go:203] 
	W0110 01:56:56.110848   13385 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:56:56.110925   13385 out.go:285] * 
	* 
	W0110 01:56:56.112823   13385 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:56:56.115978   13385 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-106930 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (8.40s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.25s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-fzvff" [8cb1f832-df20-448b-b4bd-4124ed5328bb] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003947859s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-106930 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-106930 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (244.016584ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:56:47.522630   12820 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:56:47.523490   12820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:47.523522   12820 out.go:374] Setting ErrFile to fd 2...
	I0110 01:56:47.523544   12820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:47.523891   12820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 01:56:47.524254   12820 mustload.go:66] Loading cluster: addons-106930
	I0110 01:56:47.524681   12820 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:47.524721   12820 addons.go:622] checking whether the cluster is paused
	I0110 01:56:47.524896   12820 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:47.524927   12820 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:56:47.525517   12820 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:56:47.541804   12820 ssh_runner.go:195] Run: systemctl --version
	I0110 01:56:47.541854   12820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:56:47.558052   12820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:56:47.663138   12820 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:56:47.663276   12820 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:56:47.697722   12820 cri.go:96] found id: "5dafbb5913003c3cb4147e90f1dde30311fabfa04e53c2fe9e8b29032265715e"
	I0110 01:56:47.697745   12820 cri.go:96] found id: "54899c00ceb5888bd4e38adcc09f61d6f4613e853ecdd6b32e4c5bb99d581e72"
	I0110 01:56:47.697750   12820 cri.go:96] found id: "6b948bf13d65c047739bccae4ed739b9f7fa926df29d62f31737a93200f79453"
	I0110 01:56:47.697753   12820 cri.go:96] found id: "7eebd1b695c51c83b430b93742ee73b2ca9993855903b7e6f7ae9d85c8d2fc50"
	I0110 01:56:47.697757   12820 cri.go:96] found id: "d95a18b705959f30d3471889d63995aa4063cde1d6299c4b2f1c667a21a75efd"
	I0110 01:56:47.697760   12820 cri.go:96] found id: "a458547631bdc917e613986d8f7ac6fd7565359fe88471bbce824f1874e6ec32"
	I0110 01:56:47.697763   12820 cri.go:96] found id: "abd9f763f3604b1b1ed81f007194c72054c1a43886832027733e3a1212a8a3b9"
	I0110 01:56:47.697766   12820 cri.go:96] found id: "e27ae252f277861a588db1ed9829c8aa537b82b2413c3d13c83bffbb69308234"
	I0110 01:56:47.697769   12820 cri.go:96] found id: "a9693cc77cf0c484c4c27fa1cdee8533ce5f5d13d0d4db3bbb53a00565b6f118"
	I0110 01:56:47.697775   12820 cri.go:96] found id: "2c8af457d0f780608fcdfa1b8cb701ac07004f54e4c6aa0a8e3274b148b5e61b"
	I0110 01:56:47.697778   12820 cri.go:96] found id: "dbe5d7a996f35eb0c94f70151b1bf92153b3fec733f1ecc9a041cb88ee462f6a"
	I0110 01:56:47.697786   12820 cri.go:96] found id: "a23567b0b791f5de297c9aef90ff8140157bc0649af00ce0e5e30e85bd9ad5ab"
	I0110 01:56:47.697789   12820 cri.go:96] found id: "8bce067de04e9ddb264d6b93bf83726a56f1d31349205b41d7b0321df22fc8d2"
	I0110 01:56:47.697793   12820 cri.go:96] found id: "245c82556c09186d3976774f81d9cd646157daeb5a2ffd046ba607941618606e"
	I0110 01:56:47.697796   12820 cri.go:96] found id: "b6868c2a5d4ce10096791acf437ee992cfdbd960bd8a366942e1351e6d349ca0"
	I0110 01:56:47.697801   12820 cri.go:96] found id: "3e0c2863ef6c21cbabb9e19c9f8f04a86e18a362494488884d4f1773f7580030"
	I0110 01:56:47.697804   12820 cri.go:96] found id: "27448255d1e1c54f8723ee7cca1526b639c1aebd34bdb2d75396a1ac1817885e"
	I0110 01:56:47.697808   12820 cri.go:96] found id: "6f32539141278af9c24103ee0c0a5e69ce237dbbe3695b92b0c5bf3956b30002"
	I0110 01:56:47.697811   12820 cri.go:96] found id: "5a226148b0ac79fc9d466c6da91dac92efac9abb8d69f4c01f8ce6b764e6b7bc"
	I0110 01:56:47.697814   12820 cri.go:96] found id: "014b6f8800d7d2127c3f3e57c97f5ed5d5a2361c80253dc74bd9f42bf604e9c2"
	I0110 01:56:47.697818   12820 cri.go:96] found id: "c951c20416799b3f43a759d9a52dedde3ee480d19bc9e255bb3cc1c62a5a8d25"
	I0110 01:56:47.697821   12820 cri.go:96] found id: "1c96ddd0e76bb0b8b444708102feeda0babec3472e7b33767118eb64739b9136"
	I0110 01:56:47.697824   12820 cri.go:96] found id: "80a58bbf677ab4ba0d96a2e9411138aa345db6101e0e1af2e9ea8754ffd39109"
	I0110 01:56:47.697827   12820 cri.go:96] found id: ""
	I0110 01:56:47.697903   12820 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:56:47.712651   12820 out.go:203] 
	W0110 01:56:47.715638   12820 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:56:47.715674   12820 out.go:285] * 
	* 
	W0110 01:56:47.717396   12820 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:56:47.720378   12820 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-106930 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.25s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.46s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 7.588992ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-b4gv8" [7d1af253-4564-484c-93ef-eb8b40ae57ef] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004287217s
addons_test.go:465: (dbg) Run:  kubectl --context addons-106930 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-106930 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-106930 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (309.879796ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:56:41.223059   12713 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:56:41.224222   12713 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:41.224235   12713 out.go:374] Setting ErrFile to fd 2...
	I0110 01:56:41.224241   12713 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:41.224548   12713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 01:56:41.224872   12713 mustload.go:66] Loading cluster: addons-106930
	I0110 01:56:41.225273   12713 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:41.225290   12713 addons.go:622] checking whether the cluster is paused
	I0110 01:56:41.225429   12713 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:41.225444   12713 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:56:41.226001   12713 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:56:41.248174   12713 ssh_runner.go:195] Run: systemctl --version
	I0110 01:56:41.248265   12713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:56:41.268323   12713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:56:41.386793   12713 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:56:41.386899   12713 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:56:41.442944   12713 cri.go:96] found id: "5dafbb5913003c3cb4147e90f1dde30311fabfa04e53c2fe9e8b29032265715e"
	I0110 01:56:41.442968   12713 cri.go:96] found id: "54899c00ceb5888bd4e38adcc09f61d6f4613e853ecdd6b32e4c5bb99d581e72"
	I0110 01:56:41.442974   12713 cri.go:96] found id: "6b948bf13d65c047739bccae4ed739b9f7fa926df29d62f31737a93200f79453"
	I0110 01:56:41.442978   12713 cri.go:96] found id: "7eebd1b695c51c83b430b93742ee73b2ca9993855903b7e6f7ae9d85c8d2fc50"
	I0110 01:56:41.442981   12713 cri.go:96] found id: "d95a18b705959f30d3471889d63995aa4063cde1d6299c4b2f1c667a21a75efd"
	I0110 01:56:41.442985   12713 cri.go:96] found id: "a458547631bdc917e613986d8f7ac6fd7565359fe88471bbce824f1874e6ec32"
	I0110 01:56:41.443014   12713 cri.go:96] found id: "abd9f763f3604b1b1ed81f007194c72054c1a43886832027733e3a1212a8a3b9"
	I0110 01:56:41.443026   12713 cri.go:96] found id: "e27ae252f277861a588db1ed9829c8aa537b82b2413c3d13c83bffbb69308234"
	I0110 01:56:41.443030   12713 cri.go:96] found id: "a9693cc77cf0c484c4c27fa1cdee8533ce5f5d13d0d4db3bbb53a00565b6f118"
	I0110 01:56:41.443037   12713 cri.go:96] found id: "2c8af457d0f780608fcdfa1b8cb701ac07004f54e4c6aa0a8e3274b148b5e61b"
	I0110 01:56:41.443044   12713 cri.go:96] found id: "dbe5d7a996f35eb0c94f70151b1bf92153b3fec733f1ecc9a041cb88ee462f6a"
	I0110 01:56:41.443048   12713 cri.go:96] found id: "a23567b0b791f5de297c9aef90ff8140157bc0649af00ce0e5e30e85bd9ad5ab"
	I0110 01:56:41.443051   12713 cri.go:96] found id: "8bce067de04e9ddb264d6b93bf83726a56f1d31349205b41d7b0321df22fc8d2"
	I0110 01:56:41.443055   12713 cri.go:96] found id: "245c82556c09186d3976774f81d9cd646157daeb5a2ffd046ba607941618606e"
	I0110 01:56:41.443058   12713 cri.go:96] found id: "b6868c2a5d4ce10096791acf437ee992cfdbd960bd8a366942e1351e6d349ca0"
	I0110 01:56:41.443069   12713 cri.go:96] found id: "3e0c2863ef6c21cbabb9e19c9f8f04a86e18a362494488884d4f1773f7580030"
	I0110 01:56:41.443088   12713 cri.go:96] found id: "27448255d1e1c54f8723ee7cca1526b639c1aebd34bdb2d75396a1ac1817885e"
	I0110 01:56:41.443100   12713 cri.go:96] found id: "6f32539141278af9c24103ee0c0a5e69ce237dbbe3695b92b0c5bf3956b30002"
	I0110 01:56:41.443105   12713 cri.go:96] found id: "5a226148b0ac79fc9d466c6da91dac92efac9abb8d69f4c01f8ce6b764e6b7bc"
	I0110 01:56:41.443108   12713 cri.go:96] found id: "014b6f8800d7d2127c3f3e57c97f5ed5d5a2361c80253dc74bd9f42bf604e9c2"
	I0110 01:56:41.443120   12713 cri.go:96] found id: "c951c20416799b3f43a759d9a52dedde3ee480d19bc9e255bb3cc1c62a5a8d25"
	I0110 01:56:41.443124   12713 cri.go:96] found id: "1c96ddd0e76bb0b8b444708102feeda0babec3472e7b33767118eb64739b9136"
	I0110 01:56:41.443127   12713 cri.go:96] found id: "80a58bbf677ab4ba0d96a2e9411138aa345db6101e0e1af2e9ea8754ffd39109"
	I0110 01:56:41.443130   12713 cri.go:96] found id: ""
	I0110 01:56:41.443200   12713 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:56:41.461573   12713 out.go:203] 
	W0110 01:56:41.465576   12713 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:56:41.465624   12713 out.go:285] * 
	* 
	W0110 01:56:41.467383   12713 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:56:41.471019   12713 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-106930 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.46s)

                                                
                                    
TestAddons/parallel/CSI (34.25s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0110 01:56:32.648950    4168 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0110 01:56:32.656336    4168 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0110 01:56:32.656360    4168 kapi.go:107] duration metric: took 7.424475ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 7.432878ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-106930 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-106930 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-106930 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-106930 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-106930 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-106930 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-106930 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-106930 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-106930 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [c87f2a3f-d87f-4c35-8deb-02fc7800aa63] Pending
helpers_test.go:353: "task-pv-pod" [c87f2a3f-d87f-4c35-8deb-02fc7800aa63] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [c87f2a3f-d87f-4c35-8deb-02fc7800aa63] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.018069347s
addons_test.go:574: (dbg) Run:  kubectl --context addons-106930 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-106930 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-106930 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-106930 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-106930 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-106930 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-106930 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-106930 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-106930 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-106930 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-106930 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-106930 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-106930 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-106930 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-106930 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-106930 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [7d0fcb33-6c9c-4f8b-825c-335f44e76f55] Pending
helpers_test.go:353: "task-pv-pod-restore" [7d0fcb33-6c9c-4f8b-825c-335f44e76f55] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [7d0fcb33-6c9c-4f8b-825c-335f44e76f55] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003165045s
addons_test.go:616: (dbg) Run:  kubectl --context addons-106930 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-106930 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-106930 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-106930 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-106930 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (283.218171ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:57:06.396568   13737 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:57:06.396818   13737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:57:06.396830   13737 out.go:374] Setting ErrFile to fd 2...
	I0110 01:57:06.396836   13737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:57:06.397148   13737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 01:57:06.397486   13737 mustload.go:66] Loading cluster: addons-106930
	I0110 01:57:06.397908   13737 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:57:06.397930   13737 addons.go:622] checking whether the cluster is paused
	I0110 01:57:06.398116   13737 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:57:06.398135   13737 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:57:06.398985   13737 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:57:06.420154   13737 ssh_runner.go:195] Run: systemctl --version
	I0110 01:57:06.420214   13737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:57:06.442415   13737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:57:06.554487   13737 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:57:06.554569   13737 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:57:06.598132   13737 cri.go:96] found id: "5dafbb5913003c3cb4147e90f1dde30311fabfa04e53c2fe9e8b29032265715e"
	I0110 01:57:06.598154   13737 cri.go:96] found id: "54899c00ceb5888bd4e38adcc09f61d6f4613e853ecdd6b32e4c5bb99d581e72"
	I0110 01:57:06.598160   13737 cri.go:96] found id: "6b948bf13d65c047739bccae4ed739b9f7fa926df29d62f31737a93200f79453"
	I0110 01:57:06.598164   13737 cri.go:96] found id: "7eebd1b695c51c83b430b93742ee73b2ca9993855903b7e6f7ae9d85c8d2fc50"
	I0110 01:57:06.598167   13737 cri.go:96] found id: "d95a18b705959f30d3471889d63995aa4063cde1d6299c4b2f1c667a21a75efd"
	I0110 01:57:06.598171   13737 cri.go:96] found id: "a458547631bdc917e613986d8f7ac6fd7565359fe88471bbce824f1874e6ec32"
	I0110 01:57:06.598180   13737 cri.go:96] found id: "abd9f763f3604b1b1ed81f007194c72054c1a43886832027733e3a1212a8a3b9"
	I0110 01:57:06.598184   13737 cri.go:96] found id: "e27ae252f277861a588db1ed9829c8aa537b82b2413c3d13c83bffbb69308234"
	I0110 01:57:06.598188   13737 cri.go:96] found id: "a9693cc77cf0c484c4c27fa1cdee8533ce5f5d13d0d4db3bbb53a00565b6f118"
	I0110 01:57:06.598199   13737 cri.go:96] found id: "2c8af457d0f780608fcdfa1b8cb701ac07004f54e4c6aa0a8e3274b148b5e61b"
	I0110 01:57:06.598207   13737 cri.go:96] found id: "dbe5d7a996f35eb0c94f70151b1bf92153b3fec733f1ecc9a041cb88ee462f6a"
	I0110 01:57:06.598215   13737 cri.go:96] found id: "a23567b0b791f5de297c9aef90ff8140157bc0649af00ce0e5e30e85bd9ad5ab"
	I0110 01:57:06.598225   13737 cri.go:96] found id: "8bce067de04e9ddb264d6b93bf83726a56f1d31349205b41d7b0321df22fc8d2"
	I0110 01:57:06.598228   13737 cri.go:96] found id: "245c82556c09186d3976774f81d9cd646157daeb5a2ffd046ba607941618606e"
	I0110 01:57:06.598231   13737 cri.go:96] found id: "b6868c2a5d4ce10096791acf437ee992cfdbd960bd8a366942e1351e6d349ca0"
	I0110 01:57:06.598245   13737 cri.go:96] found id: "3e0c2863ef6c21cbabb9e19c9f8f04a86e18a362494488884d4f1773f7580030"
	I0110 01:57:06.598259   13737 cri.go:96] found id: "27448255d1e1c54f8723ee7cca1526b639c1aebd34bdb2d75396a1ac1817885e"
	I0110 01:57:06.598265   13737 cri.go:96] found id: "6f32539141278af9c24103ee0c0a5e69ce237dbbe3695b92b0c5bf3956b30002"
	I0110 01:57:06.598268   13737 cri.go:96] found id: "5a226148b0ac79fc9d466c6da91dac92efac9abb8d69f4c01f8ce6b764e6b7bc"
	I0110 01:57:06.598272   13737 cri.go:96] found id: "014b6f8800d7d2127c3f3e57c97f5ed5d5a2361c80253dc74bd9f42bf604e9c2"
	I0110 01:57:06.598278   13737 cri.go:96] found id: "c951c20416799b3f43a759d9a52dedde3ee480d19bc9e255bb3cc1c62a5a8d25"
	I0110 01:57:06.598284   13737 cri.go:96] found id: "1c96ddd0e76bb0b8b444708102feeda0babec3472e7b33767118eb64739b9136"
	I0110 01:57:06.598288   13737 cri.go:96] found id: "80a58bbf677ab4ba0d96a2e9411138aa345db6101e0e1af2e9ea8754ffd39109"
	I0110 01:57:06.598295   13737 cri.go:96] found id: ""
	I0110 01:57:06.598353   13737 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:57:06.617061   13737 out.go:203] 
	W0110 01:57:06.619938   13737 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:57:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:57:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:57:06.619971   13737 out.go:285] * 
	* 
	W0110 01:57:06.621816   13737 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:57:06.624796   13737 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-106930 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-106930 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-106930 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (262.778202ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:57:06.691244   13792 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:57:06.691476   13792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:57:06.691507   13792 out.go:374] Setting ErrFile to fd 2...
	I0110 01:57:06.691527   13792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:57:06.691826   13792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 01:57:06.692306   13792 mustload.go:66] Loading cluster: addons-106930
	I0110 01:57:06.692735   13792 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:57:06.692780   13792 addons.go:622] checking whether the cluster is paused
	I0110 01:57:06.692938   13792 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:57:06.692971   13792 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:57:06.693536   13792 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:57:06.711480   13792 ssh_runner.go:195] Run: systemctl --version
	I0110 01:57:06.711543   13792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:57:06.729056   13792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:57:06.834171   13792 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:57:06.834258   13792 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:57:06.865107   13792 cri.go:96] found id: "5dafbb5913003c3cb4147e90f1dde30311fabfa04e53c2fe9e8b29032265715e"
	I0110 01:57:06.865135   13792 cri.go:96] found id: "54899c00ceb5888bd4e38adcc09f61d6f4613e853ecdd6b32e4c5bb99d581e72"
	I0110 01:57:06.865140   13792 cri.go:96] found id: "6b948bf13d65c047739bccae4ed739b9f7fa926df29d62f31737a93200f79453"
	I0110 01:57:06.865144   13792 cri.go:96] found id: "7eebd1b695c51c83b430b93742ee73b2ca9993855903b7e6f7ae9d85c8d2fc50"
	I0110 01:57:06.865147   13792 cri.go:96] found id: "d95a18b705959f30d3471889d63995aa4063cde1d6299c4b2f1c667a21a75efd"
	I0110 01:57:06.865151   13792 cri.go:96] found id: "a458547631bdc917e613986d8f7ac6fd7565359fe88471bbce824f1874e6ec32"
	I0110 01:57:06.865174   13792 cri.go:96] found id: "abd9f763f3604b1b1ed81f007194c72054c1a43886832027733e3a1212a8a3b9"
	I0110 01:57:06.865183   13792 cri.go:96] found id: "e27ae252f277861a588db1ed9829c8aa537b82b2413c3d13c83bffbb69308234"
	I0110 01:57:06.865187   13792 cri.go:96] found id: "a9693cc77cf0c484c4c27fa1cdee8533ce5f5d13d0d4db3bbb53a00565b6f118"
	I0110 01:57:06.865201   13792 cri.go:96] found id: "2c8af457d0f780608fcdfa1b8cb701ac07004f54e4c6aa0a8e3274b148b5e61b"
	I0110 01:57:06.865208   13792 cri.go:96] found id: "dbe5d7a996f35eb0c94f70151b1bf92153b3fec733f1ecc9a041cb88ee462f6a"
	I0110 01:57:06.865212   13792 cri.go:96] found id: "a23567b0b791f5de297c9aef90ff8140157bc0649af00ce0e5e30e85bd9ad5ab"
	I0110 01:57:06.865215   13792 cri.go:96] found id: "8bce067de04e9ddb264d6b93bf83726a56f1d31349205b41d7b0321df22fc8d2"
	I0110 01:57:06.865218   13792 cri.go:96] found id: "245c82556c09186d3976774f81d9cd646157daeb5a2ffd046ba607941618606e"
	I0110 01:57:06.865221   13792 cri.go:96] found id: "b6868c2a5d4ce10096791acf437ee992cfdbd960bd8a366942e1351e6d349ca0"
	I0110 01:57:06.865231   13792 cri.go:96] found id: "3e0c2863ef6c21cbabb9e19c9f8f04a86e18a362494488884d4f1773f7580030"
	I0110 01:57:06.865235   13792 cri.go:96] found id: "27448255d1e1c54f8723ee7cca1526b639c1aebd34bdb2d75396a1ac1817885e"
	I0110 01:57:06.865252   13792 cri.go:96] found id: "6f32539141278af9c24103ee0c0a5e69ce237dbbe3695b92b0c5bf3956b30002"
	I0110 01:57:06.865263   13792 cri.go:96] found id: "5a226148b0ac79fc9d466c6da91dac92efac9abb8d69f4c01f8ce6b764e6b7bc"
	I0110 01:57:06.865267   13792 cri.go:96] found id: "014b6f8800d7d2127c3f3e57c97f5ed5d5a2361c80253dc74bd9f42bf604e9c2"
	I0110 01:57:06.865272   13792 cri.go:96] found id: "c951c20416799b3f43a759d9a52dedde3ee480d19bc9e255bb3cc1c62a5a8d25"
	I0110 01:57:06.865275   13792 cri.go:96] found id: "1c96ddd0e76bb0b8b444708102feeda0babec3472e7b33767118eb64739b9136"
	I0110 01:57:06.865278   13792 cri.go:96] found id: "80a58bbf677ab4ba0d96a2e9411138aa345db6101e0e1af2e9ea8754ffd39109"
	I0110 01:57:06.865292   13792 cri.go:96] found id: ""
	I0110 01:57:06.865350   13792 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:57:06.881256   13792 out.go:203] 
	W0110 01:57:06.884197   13792 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:57:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:57:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:57:06.884237   13792 out.go:285] * 
	* 
	W0110 01:57:06.885949   13792 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:57:06.888850   13792 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-106930 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (34.25s)

                                                
                                    
TestAddons/parallel/Headlamp (3.38s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-106930 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-106930 --alsologtostderr -v=1: exit status 11 (392.466509ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:56:32.742277   11943 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:56:32.742561   11943 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:32.742588   11943 out.go:374] Setting ErrFile to fd 2...
	I0110 01:56:32.742606   11943 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:32.742881   11943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 01:56:32.743160   11943 mustload.go:66] Loading cluster: addons-106930
	I0110 01:56:32.743540   11943 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:32.743579   11943 addons.go:622] checking whether the cluster is paused
	I0110 01:56:32.743701   11943 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:32.743724   11943 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:56:32.744284   11943 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:56:32.785012   11943 ssh_runner.go:195] Run: systemctl --version
	I0110 01:56:32.785082   11943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:56:32.836058   11943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:56:32.944132   11943 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:56:32.944225   11943 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:56:32.986979   11943 cri.go:96] found id: "5dafbb5913003c3cb4147e90f1dde30311fabfa04e53c2fe9e8b29032265715e"
	I0110 01:56:32.986997   11943 cri.go:96] found id: "54899c00ceb5888bd4e38adcc09f61d6f4613e853ecdd6b32e4c5bb99d581e72"
	I0110 01:56:32.987002   11943 cri.go:96] found id: "6b948bf13d65c047739bccae4ed739b9f7fa926df29d62f31737a93200f79453"
	I0110 01:56:32.987006   11943 cri.go:96] found id: "7eebd1b695c51c83b430b93742ee73b2ca9993855903b7e6f7ae9d85c8d2fc50"
	I0110 01:56:32.987009   11943 cri.go:96] found id: "d95a18b705959f30d3471889d63995aa4063cde1d6299c4b2f1c667a21a75efd"
	I0110 01:56:32.987012   11943 cri.go:96] found id: "a458547631bdc917e613986d8f7ac6fd7565359fe88471bbce824f1874e6ec32"
	I0110 01:56:32.987015   11943 cri.go:96] found id: "abd9f763f3604b1b1ed81f007194c72054c1a43886832027733e3a1212a8a3b9"
	I0110 01:56:32.987018   11943 cri.go:96] found id: "e27ae252f277861a588db1ed9829c8aa537b82b2413c3d13c83bffbb69308234"
	I0110 01:56:32.987021   11943 cri.go:96] found id: "a9693cc77cf0c484c4c27fa1cdee8533ce5f5d13d0d4db3bbb53a00565b6f118"
	I0110 01:56:32.987027   11943 cri.go:96] found id: "2c8af457d0f780608fcdfa1b8cb701ac07004f54e4c6aa0a8e3274b148b5e61b"
	I0110 01:56:32.987039   11943 cri.go:96] found id: "dbe5d7a996f35eb0c94f70151b1bf92153b3fec733f1ecc9a041cb88ee462f6a"
	I0110 01:56:32.987043   11943 cri.go:96] found id: "a23567b0b791f5de297c9aef90ff8140157bc0649af00ce0e5e30e85bd9ad5ab"
	I0110 01:56:32.987046   11943 cri.go:96] found id: "8bce067de04e9ddb264d6b93bf83726a56f1d31349205b41d7b0321df22fc8d2"
	I0110 01:56:32.987049   11943 cri.go:96] found id: "245c82556c09186d3976774f81d9cd646157daeb5a2ffd046ba607941618606e"
	I0110 01:56:32.987052   11943 cri.go:96] found id: "b6868c2a5d4ce10096791acf437ee992cfdbd960bd8a366942e1351e6d349ca0"
	I0110 01:56:32.987056   11943 cri.go:96] found id: "3e0c2863ef6c21cbabb9e19c9f8f04a86e18a362494488884d4f1773f7580030"
	I0110 01:56:32.987059   11943 cri.go:96] found id: "27448255d1e1c54f8723ee7cca1526b639c1aebd34bdb2d75396a1ac1817885e"
	I0110 01:56:32.987063   11943 cri.go:96] found id: "6f32539141278af9c24103ee0c0a5e69ce237dbbe3695b92b0c5bf3956b30002"
	I0110 01:56:32.987066   11943 cri.go:96] found id: "5a226148b0ac79fc9d466c6da91dac92efac9abb8d69f4c01f8ce6b764e6b7bc"
	I0110 01:56:32.987069   11943 cri.go:96] found id: "014b6f8800d7d2127c3f3e57c97f5ed5d5a2361c80253dc74bd9f42bf604e9c2"
	I0110 01:56:32.987074   11943 cri.go:96] found id: "c951c20416799b3f43a759d9a52dedde3ee480d19bc9e255bb3cc1c62a5a8d25"
	I0110 01:56:32.987077   11943 cri.go:96] found id: "1c96ddd0e76bb0b8b444708102feeda0babec3472e7b33767118eb64739b9136"
	I0110 01:56:32.987079   11943 cri.go:96] found id: "80a58bbf677ab4ba0d96a2e9411138aa345db6101e0e1af2e9ea8754ffd39109"
	I0110 01:56:32.987082   11943 cri.go:96] found id: ""
	I0110 01:56:32.987135   11943 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:56:33.015113   11943 out.go:203] 
	W0110 01:56:33.018069   11943 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:56:33.018110   11943 out.go:285] * 
	* 
	W0110 01:56:33.019831   11943 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:56:33.022774   11943 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-106930 --alsologtostderr -v=1": exit status 11
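The Headlamp case is the enable-side variant of the same problem (MK_ADDON_ENABLE_PAUSED rather than MK_ADDON_DISABLE_PAUSED); the docker inspect output collected below shows the node container itself is running and not paused. A couple of quick host-side checks, sketched from commands that already appear in this report plus a hypothetical `minikube ssh` variant of the crictl call:

	docker container inspect addons-106930 --format={{.State.Status}}
	out/minikube-linux-arm64 -p addons-106930 status
	out/minikube-linux-arm64 -p addons-106930 ssh -- sudo crictl ps --quiet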
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-106930
helpers_test.go:244: (dbg) docker inspect addons-106930:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9d54a65e91dd178badebb1deb926e0202c95a318a422eeb8b0706eb93ffdf62e",
	        "Created": "2026-01-10T01:54:16.452793682Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5346,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T01:54:16.514987345Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/9d54a65e91dd178badebb1deb926e0202c95a318a422eeb8b0706eb93ffdf62e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9d54a65e91dd178badebb1deb926e0202c95a318a422eeb8b0706eb93ffdf62e/hostname",
	        "HostsPath": "/var/lib/docker/containers/9d54a65e91dd178badebb1deb926e0202c95a318a422eeb8b0706eb93ffdf62e/hosts",
	        "LogPath": "/var/lib/docker/containers/9d54a65e91dd178badebb1deb926e0202c95a318a422eeb8b0706eb93ffdf62e/9d54a65e91dd178badebb1deb926e0202c95a318a422eeb8b0706eb93ffdf62e-json.log",
	        "Name": "/addons-106930",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-106930:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-106930",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9d54a65e91dd178badebb1deb926e0202c95a318a422eeb8b0706eb93ffdf62e",
	                "LowerDir": "/var/lib/docker/overlay2/dee185b7d0ea8777d319cf32d2872e1740b606467453a00f8f0f18c73981b4af-init/diff:/var/lib/docker/overlay2/2b235871be32ebd3e55c0a490bf5b289824c6be94b4d2763f5bca3b814af3fd1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dee185b7d0ea8777d319cf32d2872e1740b606467453a00f8f0f18c73981b4af/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dee185b7d0ea8777d319cf32d2872e1740b606467453a00f8f0f18c73981b4af/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dee185b7d0ea8777d319cf32d2872e1740b606467453a00f8f0f18c73981b4af/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-106930",
	                "Source": "/var/lib/docker/volumes/addons-106930/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-106930",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-106930",
	                "name.minikube.sigs.k8s.io": "addons-106930",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3fda6974847874b5bd9b8981f6755bb736c7bc2ffe1cf150e4ef5788f72a35c9",
	            "SandboxKey": "/var/run/docker/netns/3fda69748478",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-106930": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:28:58:ee:3c:91",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "21a057cdcd08f15edc44365edc5359ae15c24f357d64fc26cc5dbe558c20775a",
	                    "EndpointID": "638715bfc9cc72db463598b0f66e3ac66fc0802ccdb64aba52ff9663e5aafe84",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-106930",
	                        "9d54a65e91dd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
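The inspect dump above is mostly boilerplate for this triage; the few fields relevant to the SSH/port mapping the addon commands depend on can be pulled out with a short jq filter (a sketch that assumes jq is available on the CI host; it is not part of the test harness):

	docker inspect addons-106930 | jq '.[0] | {Status: .State.Status, Paused: .State.Paused, IP: .NetworkSettings.Networks["addons-106930"].IPAddress, Ports: .NetworkSettings.Ports}'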
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-106930 -n addons-106930
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-106930 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-106930 logs -n 25: (1.529601293s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-718348 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-718348   │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │ 10 Jan 26 01:53 UTC │
	│ delete  │ -p download-only-718348                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-718348   │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │ 10 Jan 26 01:53 UTC │
	│ start   │ -o=json --download-only -p download-only-521519 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-521519   │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │ 10 Jan 26 01:53 UTC │
	│ delete  │ -p download-only-521519                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-521519   │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │ 10 Jan 26 01:53 UTC │
	│ delete  │ -p download-only-718348                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-718348   │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │ 10 Jan 26 01:53 UTC │
	│ delete  │ -p download-only-521519                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-521519   │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │ 10 Jan 26 01:53 UTC │
	│ start   │ --download-only -p download-docker-857873 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-857873 │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │                     │
	│ delete  │ -p download-docker-857873                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-857873 │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │ 10 Jan 26 01:53 UTC │
	│ start   │ --download-only -p binary-mirror-933958 --alsologtostderr --binary-mirror http://127.0.0.1:44897 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-933958   │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │                     │
	│ delete  │ -p binary-mirror-933958                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-933958   │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │ 10 Jan 26 01:53 UTC │
	│ addons  │ disable dashboard -p addons-106930                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-106930          │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │                     │
	│ addons  │ enable dashboard -p addons-106930                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-106930          │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │                     │
	│ start   │ -p addons-106930 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-106930          │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │ 10 Jan 26 01:56 UTC │
	│ addons  │ addons-106930 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-106930          │ jenkins │ v1.37.0 │ 10 Jan 26 01:56 UTC │                     │
	│ addons  │ addons-106930 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-106930          │ jenkins │ v1.37.0 │ 10 Jan 26 01:56 UTC │                     │
	│ addons  │ addons-106930 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-106930          │ jenkins │ v1.37.0 │ 10 Jan 26 01:56 UTC │                     │
	│ addons  │ addons-106930 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-106930          │ jenkins │ v1.37.0 │ 10 Jan 26 01:56 UTC │                     │
	│ ip      │ addons-106930 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-106930          │ jenkins │ v1.37.0 │ 10 Jan 26 01:56 UTC │ 10 Jan 26 01:56 UTC │
	│ addons  │ addons-106930 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-106930          │ jenkins │ v1.37.0 │ 10 Jan 26 01:56 UTC │                     │
	│ ssh     │ addons-106930 ssh cat /opt/local-path-provisioner/pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-106930          │ jenkins │ v1.37.0 │ 10 Jan 26 01:56 UTC │ 10 Jan 26 01:56 UTC │
	│ addons  │ addons-106930 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-106930          │ jenkins │ v1.37.0 │ 10 Jan 26 01:56 UTC │                     │
	│ addons  │ addons-106930 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-106930          │ jenkins │ v1.37.0 │ 10 Jan 26 01:56 UTC │                     │
	│ addons  │ enable headlamp -p addons-106930 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-106930          │ jenkins │ v1.37.0 │ 10 Jan 26 01:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
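
The table above records the minikube CLI invocations the test harness issued against this profile. As a rough illustration only (not the harness's actual code), a minimal Go sketch that shells out to the same binary and captures the exit status plus combined output could look like this; it assumes the out/minikube-linux-arm64 binary and the addons-106930 profile exist locally.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Hypothetical re-run of one command from the table above.
    	cmd := exec.Command("out/minikube-linux-arm64",
    		"-p", "addons-106930", "addons", "disable", "volcano",
    		"--alsologtostderr", "-v=1")
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("exit ok: %v\n%s", err == nil, out)
    }
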
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 01:53:50
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 01:53:50.830126    4942 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:53:50.830306    4942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:53:50.830335    4942 out.go:374] Setting ErrFile to fd 2...
	I0110 01:53:50.830354    4942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:53:50.830616    4942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 01:53:50.831107    4942 out.go:368] Setting JSON to false
	I0110 01:53:50.831848    4942 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2180,"bootTime":1768007851,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 01:53:50.831950    4942 start.go:143] virtualization:  
	I0110 01:53:50.835374    4942 out.go:179] * [addons-106930] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 01:53:50.838353    4942 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 01:53:50.838424    4942 notify.go:221] Checking for updates...
	I0110 01:53:50.844289    4942 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 01:53:50.847155    4942 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 01:53:50.850159    4942 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 01:53:50.853029    4942 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 01:53:50.855956    4942 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 01:53:50.859035    4942 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 01:53:50.887986    4942 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 01:53:50.888107    4942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 01:53:50.945024    4942 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2026-01-10 01:53:50.93524592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 01:53:50.945133    4942 docker.go:319] overlay module found
	I0110 01:53:50.948296    4942 out.go:179] * Using the docker driver based on user configuration
	I0110 01:53:50.951090    4942 start.go:309] selected driver: docker
	I0110 01:53:50.951104    4942 start.go:928] validating driver "docker" against <nil>
	I0110 01:53:50.951118    4942 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 01:53:50.951922    4942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 01:53:51.026928    4942 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2026-01-10 01:53:51.009212126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
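
Before settling on the driver, minikube runs `docker system info --format "{{json .}}"` (both invocations are logged above) and reads fields such as ServerVersion, NCPU and MemTotal out of the returned JSON. A hedged sketch of that probe, keeping only the fields visible in the dump above and assuming a local Docker daemon:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // dockerInfo picks out a few of the fields shown in the log; the real
    // structure carries many more.
    type dockerInfo struct {
    	ServerVersion   string
    	OperatingSystem string
    	NCPU            int
    	MemTotal        int64
    }

    func main() {
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	var info dockerInfo
    	if err := json.Unmarshal(out, &info); err != nil {
    		panic(err)
    	}
    	fmt.Printf("docker %s on %s: %d CPUs, %d bytes RAM\n",
    		info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
    }
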
	I0110 01:53:51.027085    4942 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 01:53:51.027308    4942 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 01:53:51.030470    4942 out.go:179] * Using Docker driver with root privileges
	I0110 01:53:51.033411    4942 cni.go:84] Creating CNI manager for ""
	I0110 01:53:51.033481    4942 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 01:53:51.033491    4942 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 01:53:51.033569    4942 start.go:353] cluster config:
	{Name:addons-106930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-106930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 01:53:51.038569    4942 out.go:179] * Starting "addons-106930" primary control-plane node in "addons-106930" cluster
	I0110 01:53:51.041483    4942 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 01:53:51.044417    4942 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 01:53:51.047166    4942 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 01:53:51.047216    4942 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 01:53:51.047227    4942 cache.go:65] Caching tarball of preloaded images
	I0110 01:53:51.047263    4942 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 01:53:51.047332    4942 preload.go:251] Found /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 01:53:51.047343    4942 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 01:53:51.047700    4942 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/config.json ...
	I0110 01:53:51.047730    4942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/config.json: {Name:mka7fca32712a64f746b05a38742ffb0c3350861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
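
As logged above, each profile's settings are persisted as JSON under .minikube/profiles/<name>/config.json. A toy sketch of writing such a per-profile JSON file follows; the struct keeps only a handful of the fields from the config dump and is not minikube's real schema, and the directory here is a placeholder rather than the real MINIKUBE_HOME.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // profileConfig is a deliberately tiny stand-in for the cluster config
    // shown in the log above.
    type profileConfig struct {
    	Name              string
    	Driver            string
    	Memory            int
    	CPUs              int
    	KubernetesVersion string
    	ContainerRuntime  string
    }

    func main() {
    	cfg := profileConfig{
    		Name: "addons-106930", Driver: "docker", Memory: 4096, CPUs: 2,
    		KubernetesVersion: "v1.35.0", ContainerRuntime: "crio",
    	}
    	// Placeholder for $MINIKUBE_HOME/profiles/<name>.
    	dir := filepath.Join(os.TempDir(), "profiles", cfg.Name)
    	if err := os.MkdirAll(dir, 0o755); err != nil {
    		panic(err)
    	}
    	data, _ := json.MarshalIndent(cfg, "", "  ")
    	path := filepath.Join(dir, "config.json")
    	if err := os.WriteFile(path, data, 0o644); err != nil {
    		panic(err)
    	}
    	fmt.Println("wrote", path)
    }
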
	I0110 01:53:51.063371    4942 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 to local cache
	I0110 01:53:51.063494    4942 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local cache directory
	I0110 01:53:51.063513    4942 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local cache directory, skipping pull
	I0110 01:53:51.063518    4942 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in cache, skipping pull
	I0110 01:53:51.063524    4942 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 as a tarball
	I0110 01:53:51.063529    4942 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 from local cache
	I0110 01:54:09.286574    4942 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 from cached tarball
	I0110 01:54:09.286612    4942 cache.go:243] Successfully downloaded all kic artifacts
	I0110 01:54:09.286652    4942 start.go:360] acquireMachinesLock for addons-106930: {Name:mk478e48313fe9e9e7fa1523dacba9864ba25ea7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 01:54:09.286787    4942 start.go:364] duration metric: took 111.414µs to acquireMachinesLock for "addons-106930"
	I0110 01:54:09.286818    4942 start.go:93] Provisioning new machine with config: &{Name:addons-106930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-106930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 01:54:09.286903    4942 start.go:125] createHost starting for "" (driver="docker")
	I0110 01:54:09.290320    4942 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0110 01:54:09.290543    4942 start.go:159] libmachine.API.Create for "addons-106930" (driver="docker")
	I0110 01:54:09.290581    4942 client.go:173] LocalClient.Create starting
	I0110 01:54:09.290687    4942 main.go:144] libmachine: Creating CA: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem
	I0110 01:54:09.655412    4942 main.go:144] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem
	I0110 01:54:09.795070    4942 cli_runner.go:164] Run: docker network inspect addons-106930 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 01:54:09.810186    4942 cli_runner.go:211] docker network inspect addons-106930 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 01:54:09.810272    4942 network_create.go:284] running [docker network inspect addons-106930] to gather additional debugging logs...
	I0110 01:54:09.810292    4942 cli_runner.go:164] Run: docker network inspect addons-106930
	W0110 01:54:09.825855    4942 cli_runner.go:211] docker network inspect addons-106930 returned with exit code 1
	I0110 01:54:09.825885    4942 network_create.go:287] error running [docker network inspect addons-106930]: docker network inspect addons-106930: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-106930 not found
	I0110 01:54:09.825905    4942 network_create.go:289] output of [docker network inspect addons-106930]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-106930 not found
	
	** /stderr **
	I0110 01:54:09.826004    4942 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 01:54:09.841924    4942 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a055b0}
	I0110 01:54:09.841963    4942 network_create.go:124] attempt to create docker network addons-106930 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0110 01:54:09.842023    4942 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-106930 addons-106930
	I0110 01:54:09.898085    4942 network_create.go:108] docker network addons-106930 192.168.49.0/24 created
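
The sequence above is "inspect the named network and, if the daemon reports it missing, create it with an explicit subnet and gateway". A rough Go sketch of that inspect-then-create flow, reusing the exact flags from the log; it assumes a local Docker daemon and that the chosen subnet is free.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureNetwork creates the named bridge network if `docker network inspect`
    // reports it does not exist yet, mirroring the flow logged above.
    func ensureNetwork(name, subnet, gateway string) error {
    	if err := exec.Command("docker", "network", "inspect", name).Run(); err == nil {
    		return nil // already present
    	}
    	return exec.Command("docker", "network", "create",
    		"--driver=bridge",
    		"--subnet="+subnet,
    		"--gateway="+gateway,
    		"-o", "--ip-masq", "-o", "--icc",
    		"-o", "com.docker.network.driver.mtu=1500",
    		"--label=created_by.minikube.sigs.k8s.io=true",
    		"--label=name.minikube.sigs.k8s.io="+name,
    		name).Run()
    }

    func main() {
    	if err := ensureNetwork("addons-106930", "192.168.49.0/24", "192.168.49.1"); err != nil {
    		panic(err)
    	}
    	fmt.Println("network ready")
    }
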
	I0110 01:54:09.898112    4942 kic.go:121] calculated static IP "192.168.49.2" for the "addons-106930" container
	I0110 01:54:09.898186    4942 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 01:54:09.917506    4942 cli_runner.go:164] Run: docker volume create addons-106930 --label name.minikube.sigs.k8s.io=addons-106930 --label created_by.minikube.sigs.k8s.io=true
	I0110 01:54:09.934436    4942 oci.go:103] Successfully created a docker volume addons-106930
	I0110 01:54:09.934524    4942 cli_runner.go:164] Run: docker run --rm --name addons-106930-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-106930 --entrypoint /usr/bin/test -v addons-106930:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 01:54:12.383948    4942 cli_runner.go:217] Completed: docker run --rm --name addons-106930-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-106930 --entrypoint /usr/bin/test -v addons-106930:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib: (2.449390775s)
	I0110 01:54:12.383982    4942 oci.go:107] Successfully prepared a docker volume addons-106930
	I0110 01:54:12.384030    4942 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 01:54:12.384046    4942 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 01:54:12.384102    4942 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-106930:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 01:54:16.380450    4942 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-106930:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.996304055s)
	I0110 01:54:16.380480    4942 kic.go:203] duration metric: took 3.996431665s to extract preloaded images to volume ...
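
The preloaded image tarball is unpacked straight into the docker volume by running the kic base image with /usr/bin/tar as the entrypoint (command and timing above). A sketch of the same "mount tarball read-only, mount volume, untar inside the container" pattern; the paths and image tag are taken from the log (digest omitted) and are assumed to exist locally.

    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	// Values taken from the log above; adjust for a real environment.
    	tarball := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4")
    	volume := "addons-106930"
    	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401"

    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		panic(err)
    	}
    }
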
	W0110 01:54:16.380682    4942 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 01:54:16.380806    4942 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 01:54:16.438204    4942 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-106930 --name addons-106930 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-106930 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-106930 --network addons-106930 --ip 192.168.49.2 --volume addons-106930:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 01:54:16.757936    4942 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Running}}
	I0110 01:54:16.776990    4942 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:54:16.800143    4942 cli_runner.go:164] Run: docker exec addons-106930 stat /var/lib/dpkg/alternatives/iptables
	I0110 01:54:16.852910    4942 oci.go:144] the created container "addons-106930" has a running status.
	I0110 01:54:16.852937    4942 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa...
	I0110 01:54:17.762258    4942 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 01:54:17.782462    4942 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:54:17.798995    4942 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 01:54:17.799020    4942 kic_runner.go:114] Args: [docker exec --privileged addons-106930 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 01:54:17.837393    4942 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:54:17.854672    4942 machine.go:94] provisionDockerMachine start ...
	I0110 01:54:17.854759    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:17.875214    4942 main.go:144] libmachine: Using SSH client type: native
	I0110 01:54:17.875530    4942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0110 01:54:17.875550    4942 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 01:54:17.876193    4942 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50120->127.0.0.1:32768: read: connection reset by peer
	I0110 01:54:21.023398    4942 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-106930
	
	I0110 01:54:21.023424    4942 ubuntu.go:182] provisioning hostname "addons-106930"
	I0110 01:54:21.023489    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:21.041431    4942 main.go:144] libmachine: Using SSH client type: native
	I0110 01:54:21.041748    4942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0110 01:54:21.041767    4942 main.go:144] libmachine: About to run SSH command:
	sudo hostname addons-106930 && echo "addons-106930" | sudo tee /etc/hostname
	I0110 01:54:21.198455    4942 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-106930
	
	I0110 01:54:21.198537    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:21.214810    4942 main.go:144] libmachine: Using SSH client type: native
	I0110 01:54:21.215122    4942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0110 01:54:21.215142    4942 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-106930' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-106930/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-106930' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 01:54:21.359742    4942 main.go:144] libmachine: SSH cmd err, output: <nil>: 
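
The shell snippet above is an idempotent /etc/hosts patch: if no line already ends in the hostname, it rewrites an existing 127.0.1.1 entry or appends a new one. A Go rendering of the same check-then-patch logic, operating on an in-memory copy rather than the real /etc/hosts:

    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    // ensureHostsEntry reproduces the shell logic above: if no line already maps
    // the hostname, rewrite an existing "127.0.1.1 ..." line or append one.
    func ensureHostsEntry(hosts, name string) string {
    	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
    		return hosts // already present
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if loopback.MatchString(hosts) {
    		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
    	}
    	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
    	in := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
    	fmt.Print(ensureHostsEntry(in, "addons-106930"))
    }
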
	I0110 01:54:21.359848    4942 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2353/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2353/.minikube}
	I0110 01:54:21.359877    4942 ubuntu.go:190] setting up certificates
	I0110 01:54:21.359887    4942 provision.go:84] configureAuth start
	I0110 01:54:21.359960    4942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-106930
	I0110 01:54:21.376919    4942 provision.go:143] copyHostCerts
	I0110 01:54:21.376996    4942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem (1082 bytes)
	I0110 01:54:21.377138    4942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem (1123 bytes)
	I0110 01:54:21.377226    4942 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem (1679 bytes)
	I0110 01:54:21.377282    4942 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem org=jenkins.addons-106930 san=[127.0.0.1 192.168.49.2 addons-106930 localhost minikube]
	I0110 01:54:21.731402    4942 provision.go:177] copyRemoteCerts
	I0110 01:54:21.731468    4942 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 01:54:21.731506    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:21.750315    4942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:54:21.855739    4942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 01:54:21.873250    4942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0110 01:54:21.889638    4942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 01:54:21.906690    4942 provision.go:87] duration metric: took 546.781407ms to configureAuth
	I0110 01:54:21.906722    4942 ubuntu.go:206] setting minikube options for container-runtime
	I0110 01:54:21.906907    4942 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:54:21.907005    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:21.924718    4942 main.go:144] libmachine: Using SSH client type: native
	I0110 01:54:21.925053    4942 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0110 01:54:21.925076    4942 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 01:54:22.235080    4942 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 01:54:22.235099    4942 machine.go:97] duration metric: took 4.380409884s to provisionDockerMachine
	I0110 01:54:22.235110    4942 client.go:176] duration metric: took 12.944518674s to LocalClient.Create
	I0110 01:54:22.235122    4942 start.go:167] duration metric: took 12.944579283s to libmachine.API.Create "addons-106930"
	I0110 01:54:22.235129    4942 start.go:293] postStartSetup for "addons-106930" (driver="docker")
	I0110 01:54:22.235142    4942 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 01:54:22.235207    4942 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 01:54:22.235266    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:22.253880    4942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:54:22.355628    4942 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 01:54:22.358685    4942 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 01:54:22.358737    4942 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 01:54:22.358750    4942 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/addons for local assets ...
	I0110 01:54:22.358817    4942 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/files for local assets ...
	I0110 01:54:22.358840    4942 start.go:296] duration metric: took 123.699702ms for postStartSetup
	I0110 01:54:22.359132    4942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-106930
	I0110 01:54:22.375726    4942 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/config.json ...
	I0110 01:54:22.376041    4942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 01:54:22.376095    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:22.393002    4942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:54:22.492344    4942 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 01:54:22.496560    4942 start.go:128] duration metric: took 13.209643634s to createHost
	I0110 01:54:22.496581    4942 start.go:83] releasing machines lock for "addons-106930", held for 13.209779859s
	I0110 01:54:22.496646    4942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-106930
	I0110 01:54:22.514514    4942 ssh_runner.go:195] Run: cat /version.json
	I0110 01:54:22.514567    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:22.514575    4942 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 01:54:22.514638    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:22.535381    4942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:54:22.544173    4942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:54:22.743989    4942 ssh_runner.go:195] Run: systemctl --version
	I0110 01:54:22.750156    4942 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 01:54:22.784805    4942 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 01:54:22.788868    4942 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 01:54:22.788944    4942 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 01:54:22.815166    4942 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 01:54:22.815193    4942 start.go:496] detecting cgroup driver to use...
	I0110 01:54:22.815225    4942 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 01:54:22.815275    4942 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 01:54:22.831254    4942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 01:54:22.844106    4942 docker.go:218] disabling cri-docker service (if available) ...
	I0110 01:54:22.844169    4942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 01:54:22.862012    4942 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 01:54:22.879727    4942 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 01:54:23.002641    4942 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 01:54:23.123940    4942 docker.go:234] disabling docker service ...
	I0110 01:54:23.124009    4942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 01:54:23.144539    4942 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 01:54:23.157700    4942 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 01:54:23.282733    4942 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 01:54:23.410002    4942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 01:54:23.422147    4942 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 01:54:23.435579    4942 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 01:54:23.435694    4942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 01:54:23.443761    4942 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 01:54:23.443937    4942 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 01:54:23.452345    4942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 01:54:23.460414    4942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 01:54:23.468497    4942 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 01:54:23.476776    4942 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 01:54:23.485551    4942 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 01:54:23.498558    4942 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 01:54:23.507103    4942 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 01:54:23.514521    4942 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0110 01:54:23.514585    4942 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0110 01:54:23.528405    4942 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 01:54:23.536261    4942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 01:54:23.656162    4942 ssh_runner.go:195] Run: sudo systemctl restart crio
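
The CRI-O setup above is a series of sed edits on /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) followed by a daemon-reload and restart. A small Go sketch of the core "replace the whole `key = ...` line" edit those sed commands perform, applied to an in-memory string rather than the real file:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // setLine mimics the sed edits above: replace any existing `key = ...` line
    // with the quoted value, or append one if the key is missing.
    func setLine(conf, key, value string) string {
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	repl := fmt.Sprintf("%s = %q", key, value)
    	if re.MatchString(conf) {
    		return re.ReplaceAllString(conf, repl)
    	}
    	return conf + repl + "\n"
    }

    func main() {
    	conf := "[crio.image]\npause_image = \"old\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
    	conf = setLine(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
    	conf = setLine(conf, "cgroup_manager", "cgroupfs")
    	fmt.Print(conf)
    }
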
	I0110 01:54:23.818671    4942 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 01:54:23.818821    4942 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 01:54:23.822488    4942 start.go:574] Will wait 60s for crictl version
	I0110 01:54:23.822599    4942 ssh_runner.go:195] Run: which crictl
	I0110 01:54:23.826022    4942 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 01:54:23.856349    4942 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 01:54:23.856487    4942 ssh_runner.go:195] Run: crio --version
	I0110 01:54:23.886530    4942 ssh_runner.go:195] Run: crio --version
	I0110 01:54:23.920774    4942 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 01:54:23.923586    4942 cli_runner.go:164] Run: docker network inspect addons-106930 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 01:54:23.940089    4942 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0110 01:54:23.943832    4942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 01:54:23.953467    4942 kubeadm.go:884] updating cluster {Name:addons-106930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-106930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 01:54:23.953585    4942 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 01:54:23.953644    4942 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 01:54:23.993172    4942 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 01:54:23.993204    4942 crio.go:433] Images already preloaded, skipping extraction
	I0110 01:54:23.993261    4942 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 01:54:24.024767    4942 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 01:54:24.024793    4942 cache_images.go:86] Images are preloaded, skipping loading
	I0110 01:54:24.024802    4942 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I0110 01:54:24.024933    4942 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-106930 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:addons-106930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 01:54:24.025019    4942 ssh_runner.go:195] Run: crio config
	I0110 01:54:24.096121    4942 cni.go:84] Creating CNI manager for ""
	I0110 01:54:24.096147    4942 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 01:54:24.096168    4942 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 01:54:24.096191    4942 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-106930 NodeName:addons-106930 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 01:54:24.096318    4942 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-106930"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 01:54:24.096396    4942 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 01:54:24.104285    4942 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 01:54:24.104358    4942 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 01:54:24.111666    4942 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0110 01:54:24.123776    4942 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 01:54:24.136414    4942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I0110 01:54:24.148796    4942 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0110 01:54:24.152763    4942 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 01:54:24.162095    4942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 01:54:24.284155    4942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 01:54:24.298595    4942 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930 for IP: 192.168.49.2
	I0110 01:54:24.298616    4942 certs.go:195] generating shared ca certs ...
	I0110 01:54:24.298651    4942 certs.go:227] acquiring lock for ca certs: {Name:mk60bb8578732e3e8efcf11c7d77fb48585828c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:54:24.298808    4942 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key
	I0110 01:54:25.168612    4942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt ...
	I0110 01:54:25.168640    4942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt: {Name:mkc3c586dc863b09f91708368e643110a4078337 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:54:25.168812    4942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key ...
	I0110 01:54:25.168820    4942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key: {Name:mkd4f49f62a57a945ed2c4704d77e2e5644d7d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:54:25.168897    4942 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key
	I0110 01:54:25.398316    4942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt ...
	I0110 01:54:25.398346    4942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt: {Name:mk019683c317bd1a1ee0a9db74bb65b7543be383 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:54:25.398525    4942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key ...
	I0110 01:54:25.398537    4942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key: {Name:mkb270b489a3932a099533fb1f4bceabadc241d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:54:25.398617    4942 certs.go:257] generating profile certs ...
	I0110 01:54:25.398678    4942 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.key
	I0110 01:54:25.398703    4942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt with IP's: []
	I0110 01:54:25.633574    4942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt ...
	I0110 01:54:25.633606    4942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: {Name:mk2cf0582e2d0e764e5f66447d69ce0ce431b21e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:54:25.633784    4942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.key ...
	I0110 01:54:25.633798    4942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.key: {Name:mk314672fad1a24dccfbaaf06f73d6d3e2771071 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:54:25.633884    4942 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/apiserver.key.86bafc70
	I0110 01:54:25.633909    4942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/apiserver.crt.86bafc70 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0110 01:54:26.056686    4942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/apiserver.crt.86bafc70 ...
	I0110 01:54:26.056716    4942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/apiserver.crt.86bafc70: {Name:mk3f41ee55f81d0450036f968af3ac111f13361e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:54:26.056908    4942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/apiserver.key.86bafc70 ...
	I0110 01:54:26.056931    4942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/apiserver.key.86bafc70: {Name:mk04fc99cbcade562fcd9934711d1baf6168ceff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:54:26.057018    4942 certs.go:382] copying /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/apiserver.crt.86bafc70 -> /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/apiserver.crt
	I0110 01:54:26.057092    4942 certs.go:386] copying /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/apiserver.key.86bafc70 -> /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/apiserver.key
	I0110 01:54:26.057145    4942 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/proxy-client.key
	I0110 01:54:26.057166    4942 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/proxy-client.crt with IP's: []
	I0110 01:54:26.294208    4942 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/proxy-client.crt ...
	I0110 01:54:26.294239    4942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/proxy-client.crt: {Name:mkf3d7f75f439a2cbd6a953c6a79f01ea2d11328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:54:26.294411    4942 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/proxy-client.key ...
	I0110 01:54:26.294423    4942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/proxy-client.key: {Name:mkd42300062c2d25dd448906fce1d93463ba6e73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:54:26.294596    4942 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 01:54:26.294641    4942 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem (1082 bytes)
	I0110 01:54:26.294672    4942 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem (1123 bytes)
	I0110 01:54:26.294710    4942 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem (1679 bytes)
	I0110 01:54:26.295241    4942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 01:54:26.311725    4942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 01:54:26.329648    4942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 01:54:26.346591    4942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 01:54:26.362999    4942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0110 01:54:26.380039    4942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 01:54:26.396398    4942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 01:54:26.412720    4942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 01:54:26.429873    4942 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 01:54:26.446889    4942 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 01:54:26.459500    4942 ssh_runner.go:195] Run: openssl version
	I0110 01:54:26.465634    4942 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 01:54:26.473243    4942 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 01:54:26.480652    4942 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 01:54:26.484288    4942 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 01:54:26.484354    4942 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 01:54:26.525067    4942 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 01:54:26.532177    4942 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 01:54:26.539240    4942 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 01:54:26.542791    4942 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 01:54:26.542848    4942 kubeadm.go:401] StartCluster: {Name:addons-106930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-106930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 01:54:26.542918    4942 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:54:26.542990    4942 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:54:26.569739    4942 cri.go:96] found id: ""
	I0110 01:54:26.569804    4942 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 01:54:26.577065    4942 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 01:54:26.584437    4942 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 01:54:26.584501    4942 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 01:54:26.591728    4942 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 01:54:26.591747    4942 kubeadm.go:158] found existing configuration files:
	
	I0110 01:54:26.591887    4942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 01:54:26.599432    4942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 01:54:26.599507    4942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 01:54:26.606561    4942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 01:54:26.613760    4942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 01:54:26.613822    4942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 01:54:26.620943    4942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 01:54:26.628101    4942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 01:54:26.628172    4942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 01:54:26.634940    4942 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 01:54:26.642178    4942 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 01:54:26.642241    4942 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 01:54:26.649086    4942 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 01:54:26.702251    4942 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 01:54:26.702313    4942 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 01:54:26.781518    4942 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 01:54:26.781593    4942 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 01:54:26.781633    4942 kubeadm.go:319] OS: Linux
	I0110 01:54:26.781683    4942 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 01:54:26.781735    4942 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 01:54:26.781789    4942 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 01:54:26.781841    4942 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 01:54:26.781893    4942 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 01:54:26.781945    4942 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 01:54:26.781993    4942 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 01:54:26.782045    4942 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 01:54:26.782094    4942 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 01:54:26.846820    4942 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 01:54:26.846948    4942 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 01:54:26.847051    4942 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 01:54:26.854472    4942 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 01:54:26.858608    4942 out.go:252]   - Generating certificates and keys ...
	I0110 01:54:26.858707    4942 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 01:54:26.858777    4942 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 01:54:26.960882    4942 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 01:54:27.430372    4942 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 01:54:27.501517    4942 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 01:54:27.624396    4942 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 01:54:27.956421    4942 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 01:54:27.956572    4942 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-106930 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0110 01:54:28.031122    4942 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 01:54:28.031280    4942 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-106930 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0110 01:54:28.802972    4942 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 01:54:28.894654    4942 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 01:54:29.849318    4942 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 01:54:29.849557    4942 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 01:54:30.089665    4942 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 01:54:30.480494    4942 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 01:54:30.724581    4942 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 01:54:30.981290    4942 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 01:54:31.081425    4942 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 01:54:31.082399    4942 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 01:54:31.085292    4942 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 01:54:31.088652    4942 out.go:252]   - Booting up control plane ...
	I0110 01:54:31.088763    4942 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 01:54:31.088842    4942 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 01:54:31.090342    4942 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 01:54:31.113742    4942 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 01:54:31.113853    4942 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 01:54:31.123407    4942 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 01:54:31.123507    4942 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 01:54:31.123548    4942 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 01:54:31.264229    4942 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 01:54:31.264342    4942 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 01:54:32.260561    4942 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00086093s
	I0110 01:54:32.263915    4942 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0110 01:54:32.264002    4942 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0110 01:54:32.264086    4942 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0110 01:54:32.264159    4942 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0110 01:54:33.775421    4942 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.511096844s
	I0110 01:54:35.226045    4942 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.961997397s
	I0110 01:54:37.267286    4942 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.002980589s
	I0110 01:54:37.306601    4942 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0110 01:54:37.329999    4942 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0110 01:54:37.344607    4942 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0110 01:54:37.344821    4942 kubeadm.go:319] [mark-control-plane] Marking the node addons-106930 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0110 01:54:37.356539    4942 kubeadm.go:319] [bootstrap-token] Using token: fbqlmg.o39ddeagvfv117mh
	I0110 01:54:37.359936    4942 out.go:252]   - Configuring RBAC rules ...
	I0110 01:54:37.360062    4942 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0110 01:54:37.365007    4942 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0110 01:54:37.374990    4942 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0110 01:54:37.379172    4942 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0110 01:54:37.383300    4942 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0110 01:54:37.387341    4942 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0110 01:54:37.675666    4942 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0110 01:54:38.102053    4942 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0110 01:54:38.674400    4942 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0110 01:54:38.675428    4942 kubeadm.go:319] 
	I0110 01:54:38.675511    4942 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0110 01:54:38.675524    4942 kubeadm.go:319] 
	I0110 01:54:38.675601    4942 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0110 01:54:38.675609    4942 kubeadm.go:319] 
	I0110 01:54:38.675634    4942 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0110 01:54:38.675696    4942 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0110 01:54:38.675749    4942 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0110 01:54:38.675757    4942 kubeadm.go:319] 
	I0110 01:54:38.675831    4942 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0110 01:54:38.675841    4942 kubeadm.go:319] 
	I0110 01:54:38.675889    4942 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0110 01:54:38.675897    4942 kubeadm.go:319] 
	I0110 01:54:38.675948    4942 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0110 01:54:38.676027    4942 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0110 01:54:38.676099    4942 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0110 01:54:38.676106    4942 kubeadm.go:319] 
	I0110 01:54:38.676205    4942 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0110 01:54:38.676287    4942 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0110 01:54:38.676295    4942 kubeadm.go:319] 
	I0110 01:54:38.676377    4942 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token fbqlmg.o39ddeagvfv117mh \
	I0110 01:54:38.676484    4942 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e531c0ad449fbe8263f118458e677222d5af5abf33ad8c77cbc1152855f23f5 \
	I0110 01:54:38.676507    4942 kubeadm.go:319] 	--control-plane 
	I0110 01:54:38.676514    4942 kubeadm.go:319] 
	I0110 01:54:38.676598    4942 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0110 01:54:38.676606    4942 kubeadm.go:319] 
	I0110 01:54:38.676688    4942 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token fbqlmg.o39ddeagvfv117mh \
	I0110 01:54:38.677006    4942 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e531c0ad449fbe8263f118458e677222d5af5abf33ad8c77cbc1152855f23f5 
	I0110 01:54:38.680427    4942 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 01:54:38.680841    4942 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 01:54:38.680955    4942 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 01:54:38.680975    4942 cni.go:84] Creating CNI manager for ""
	I0110 01:54:38.680983    4942 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 01:54:38.684040    4942 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0110 01:54:38.686841    4942 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0110 01:54:38.690485    4942 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0110 01:54:38.690500    4942 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0110 01:54:38.704706    4942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0110 01:54:38.979771    4942 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0110 01:54:38.979922    4942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 01:54:38.980009    4942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-106930 minikube.k8s.io/updated_at=2026_01_10T01_54_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510 minikube.k8s.io/name=addons-106930 minikube.k8s.io/primary=true
	I0110 01:54:39.178038    4942 ops.go:34] apiserver oom_adj: -16
	I0110 01:54:39.178169    4942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 01:54:39.678927    4942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 01:54:40.179191    4942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 01:54:40.678468    4942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 01:54:41.178176    4942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 01:54:41.678854    4942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 01:54:42.178352    4942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 01:54:42.678258    4942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 01:54:43.178893    4942 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 01:54:43.340797    4942 kubeadm.go:1114] duration metric: took 4.36091744s to wait for elevateKubeSystemPrivileges
	I0110 01:54:43.340826    4942 kubeadm.go:403] duration metric: took 16.797981496s to StartCluster
	I0110 01:54:43.340852    4942 settings.go:142] acquiring lock: {Name:mkf19ab3097ad600bdb33c5b47b20062ddaabac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:54:43.340977    4942 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 01:54:43.341369    4942 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:54:43.341584    4942 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 01:54:43.341729    4942 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0110 01:54:43.341999    4942 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:54:43.342039    4942 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0110 01:54:43.342115    4942 addons.go:70] Setting yakd=true in profile "addons-106930"
	I0110 01:54:43.342132    4942 addons.go:239] Setting addon yakd=true in "addons-106930"
	I0110 01:54:43.342152    4942 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:54:43.342729    4942 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:54:43.343181    4942 addons.go:70] Setting inspektor-gadget=true in profile "addons-106930"
	I0110 01:54:43.343202    4942 addons.go:239] Setting addon inspektor-gadget=true in "addons-106930"
	I0110 01:54:43.343224    4942 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:54:43.343370    4942 addons.go:70] Setting metrics-server=true in profile "addons-106930"
	I0110 01:54:43.343407    4942 addons.go:239] Setting addon metrics-server=true in "addons-106930"
	I0110 01:54:43.343472    4942 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:54:43.343671    4942 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:54:43.344138    4942 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:54:43.344702    4942 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-106930"
	I0110 01:54:43.344729    4942 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-106930"
	I0110 01:54:43.344752    4942 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:54:43.345163    4942 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:54:43.348011    4942 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-106930"
	I0110 01:54:43.348043    4942 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-106930"
	I0110 01:54:43.348078    4942 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:54:43.348631    4942 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:54:43.361033    4942 addons.go:70] Setting cloud-spanner=true in profile "addons-106930"
	I0110 01:54:43.361076    4942 addons.go:239] Setting addon cloud-spanner=true in "addons-106930"
	I0110 01:54:43.361123    4942 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:54:43.361677    4942 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:54:43.366569    4942 addons.go:70] Setting registry=true in profile "addons-106930"
	I0110 01:54:43.366753    4942 addons.go:239] Setting addon registry=true in "addons-106930"
	I0110 01:54:43.366823    4942 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:54:43.367455    4942 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:54:43.380197    4942 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-106930"
	I0110 01:54:43.380292    4942 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-106930"
	I0110 01:54:43.380325    4942 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:54:43.380861    4942 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:54:43.394375    4942 addons.go:70] Setting registry-creds=true in profile "addons-106930"
	I0110 01:54:43.394447    4942 addons.go:239] Setting addon registry-creds=true in "addons-106930"
	I0110 01:54:43.394500    4942 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:54:43.395001    4942 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:54:43.419924    4942 addons.go:70] Setting storage-provisioner=true in profile "addons-106930"
	I0110 01:54:43.419991    4942 addons.go:239] Setting addon storage-provisioner=true in "addons-106930"
	I0110 01:54:43.420043    4942 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:54:43.420557    4942 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:54:43.420796    4942 addons.go:70] Setting default-storageclass=true in profile "addons-106930"
	I0110 01:54:43.420823    4942 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-106930"
	I0110 01:54:43.421483    4942 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:54:43.440101    4942 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-106930"
	I0110 01:54:43.442844    4942 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-106930"
	I0110 01:54:43.443197    4942 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:54:43.441646    4942 addons.go:70] Setting gcp-auth=true in profile "addons-106930"
	I0110 01:54:43.462591    4942 mustload.go:66] Loading cluster: addons-106930
	I0110 01:54:43.462818    4942 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:54:43.463073    4942 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:54:43.477113    4942 out.go:179] * Verifying Kubernetes components...
	I0110 01:54:43.480671    4942 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 01:54:43.441655    4942 addons.go:70] Setting ingress=true in profile "addons-106930"
	I0110 01:54:43.495955    4942 addons.go:239] Setting addon ingress=true in "addons-106930"
	I0110 01:54:43.495993    4942 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:54:43.496514    4942 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:54:43.499712    4942 out.go:179]   - Using image ghcr.io/manusa/yakd:0.0.7
	I0110 01:54:43.503621    4942 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0110 01:54:43.503684    4942 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0110 01:54:43.503778    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:43.441662    4942 addons.go:70] Setting ingress-dns=true in profile "addons-106930"
	I0110 01:54:43.509630    4942 addons.go:239] Setting addon ingress-dns=true in "addons-106930"
	I0110 01:54:43.509691    4942 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:54:43.510264    4942 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:54:43.528634    4942 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0110 01:54:43.442668    4942 addons.go:70] Setting volcano=true in profile "addons-106930"
	I0110 01:54:43.535063    4942 addons.go:239] Setting addon volcano=true in "addons-106930"
	I0110 01:54:43.535117    4942 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:54:43.535638    4942 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:54:43.442682    4942 addons.go:70] Setting volumesnapshots=true in profile "addons-106930"
	I0110 01:54:43.535882    4942 addons.go:239] Setting addon volumesnapshots=true in "addons-106930"
	I0110 01:54:43.535914    4942 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:54:43.536342    4942 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:54:43.568484    4942 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I0110 01:54:43.575384    4942 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I0110 01:54:43.610530    4942 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0110 01:54:43.610793    4942 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0110 01:54:43.666061    4942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0110 01:54:43.666138    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:43.646567    4942 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0110 01:54:43.672915    4942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0110 01:54:43.672987    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:43.679908    4942 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:54:43.681486    4942 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0110 01:54:43.681534    4942 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0110 01:54:43.681619    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:43.646597    4942 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0110 01:54:43.693349    4942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0110 01:54:43.693418    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:43.705998    4942 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.46
	I0110 01:54:43.706105    4942 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0110 01:54:43.710065    4942 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0110 01:54:43.713525    4942 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0110 01:54:43.713678    4942 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0110 01:54:43.713708    4942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0110 01:54:43.713799    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:43.715433    4942 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0110 01:54:43.715460    4942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0110 01:54:43.715554    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:43.657714    4942 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-106930"
	I0110 01:54:43.728846    4942 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:54:43.729354    4942 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:54:43.658391    4942 addons.go:239] Setting addon default-storageclass=true in "addons-106930"
	I0110 01:54:43.745857    4942 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:54:43.746376    4942 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:54:43.753188    4942 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I0110 01:54:43.753207    4942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0110 01:54:43.753281    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:43.761620    4942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:54:43.777159    4942 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I0110 01:54:43.777807    4942 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0110 01:54:43.809672    4942 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0110 01:54:43.814828    4942 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0110 01:54:43.814970    4942 out.go:179]   - Using image docker.io/registry:3.0.0
	I0110 01:54:43.818820    4942 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0110 01:54:43.819443    4942 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I0110 01:54:43.819513    4942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0110 01:54:43.819610    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:43.825202    4942 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0110 01:54:43.828124    4942 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0110 01:54:43.831047    4942 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 01:54:43.831062    4942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 01:54:43.831141    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:43.862983    4942 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0110 01:54:43.866867    4942 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I0110 01:54:43.867140    4942 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0110 01:54:43.867155    4942 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0110 01:54:43.867229    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:43.886858    4942 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I0110 01:54:43.887051    4942 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0110 01:54:43.892747    4942 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0110 01:54:43.892775    4942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16257 bytes)
	I0110 01:54:43.892838    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:43.923665    4942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:54:43.924674    4942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:54:43.929231    4942 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 01:54:43.929252    4942 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 01:54:43.929318    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:43.935938    4942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:54:43.943626    4942 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0110 01:54:43.948427    4942 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0110 01:54:43.956054    4942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:54:43.957089    4942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:54:43.957318    4942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:54:43.958024    4942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:54:43.959958    4942 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0110 01:54:43.965804    4942 out.go:179]   - Using image docker.io/busybox:stable
	I0110 01:54:43.965933    4942 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0110 01:54:43.969300    4942 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0110 01:54:43.969321    4942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0110 01:54:43.969387    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:43.969610    4942 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0110 01:54:43.969624    4942 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0110 01:54:43.969660    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:43.992543    4942 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0110 01:54:44.047740    4942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:54:44.057358    4942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:54:44.060911    4942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:54:44.074052    4942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:54:44.079153    4942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:54:44.093916    4942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:54:44.102709    4942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:54:44.126970    4942 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 01:54:44.638993    4942 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0110 01:54:44.639116    4942 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0110 01:54:44.729894    4942 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0110 01:54:44.729915    4942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0110 01:54:44.788013    4942 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0110 01:54:44.788124    4942 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0110 01:54:44.829376    4942 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0110 01:54:44.829443    4942 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0110 01:54:44.884128    4942 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0110 01:54:44.884186    4942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2013 bytes)
	I0110 01:54:44.892073    4942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0110 01:54:44.892570    4942 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0110 01:54:44.892610    4942 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0110 01:54:44.901715    4942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 01:54:44.908566    4942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0110 01:54:44.940491    4942 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0110 01:54:44.940661    4942 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0110 01:54:44.960214    4942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0110 01:54:44.991713    4942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0110 01:54:45.059959    4942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0110 01:54:45.101846    4942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0110 01:54:45.115318    4942 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0110 01:54:45.115405    4942 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0110 01:54:45.134039    4942 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I0110 01:54:45.134200    4942 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0110 01:54:45.174668    4942 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0110 01:54:45.174755    4942 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0110 01:54:45.206359    4942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0110 01:54:45.271927    4942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I0110 01:54:45.336339    4942 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0110 01:54:45.336461    4942 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0110 01:54:45.481036    4942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 01:54:45.486752    4942 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0110 01:54:45.486826    4942 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0110 01:54:45.516417    4942 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0110 01:54:45.516487    4942 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0110 01:54:45.542845    4942 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0110 01:54:45.542905    4942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0110 01:54:45.561570    4942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0110 01:54:45.649862    4942 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0110 01:54:45.649932    4942 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0110 01:54:45.655535    4942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0110 01:54:45.767631    4942 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0110 01:54:45.767707    4942 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0110 01:54:45.815527    4942 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0110 01:54:45.815605    4942 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0110 01:54:45.955824    4942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0110 01:54:46.002800    4942 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0110 01:54:46.002878    4942 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0110 01:54:46.347250    4942 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0110 01:54:46.347311    4942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0110 01:54:46.348983    4942 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0110 01:54:46.349042    4942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0110 01:54:46.697307    4942 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0110 01:54:46.697381    4942 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0110 01:54:46.714018    4942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0110 01:54:46.978371    4942 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0110 01:54:46.978441    4942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0110 01:54:47.109243    4942 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.116666551s)
	I0110 01:54:47.109269    4942 start.go:987] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0110 01:54:47.110115    4942 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.983123874s)
	I0110 01:54:47.110679    4942 node_ready.go:35] waiting up to 6m0s for node "addons-106930" to be "Ready" ...
	I0110 01:54:47.393925    4942 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0110 01:54:47.393994    4942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0110 01:54:47.564150    4942 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0110 01:54:47.564220    4942 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0110 01:54:47.613783    4942 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-106930" context rescaled to 1 replicas
	I0110 01:54:47.771931    4942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W0110 01:54:49.123566    4942 node_ready.go:57] node "addons-106930" has "Ready":"False" status (will retry)
	I0110 01:54:50.536441    4942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.644290107s)
	I0110 01:54:50.536494    4942 addons.go:495] Verifying addon ingress=true in "addons-106930"
	I0110 01:54:50.536705    4942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.634914011s)
	I0110 01:54:50.536928    4942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.628290326s)
	I0110 01:54:50.536996    4942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.576683601s)
	I0110 01:54:50.537031    4942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.545249479s)
	I0110 01:54:50.537080    4942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.477043115s)
	I0110 01:54:50.537214    4942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.435278899s)
	I0110 01:54:50.537308    4942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.330855598s)
	I0110 01:54:50.537368    4942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (5.265371162s)
	I0110 01:54:50.537411    4942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.056304718s)
	I0110 01:54:50.537436    4942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.975810617s)
	I0110 01:54:50.537486    4942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.881891254s)
	I0110 01:54:50.537494    4942 addons.go:495] Verifying addon metrics-server=true in "addons-106930"
	I0110 01:54:50.537520    4942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.581637443s)
	I0110 01:54:50.537526    4942 addons.go:495] Verifying addon registry=true in "addons-106930"
	I0110 01:54:50.537896    4942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.823794551s)
	W0110 01:54:50.539084    4942 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0110 01:54:50.539128    4942 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0110 01:54:50.539976    4942 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-106930 service yakd-dashboard -n yakd-dashboard
	
	I0110 01:54:50.540011    4942 out.go:179] * Verifying ingress addon...
	I0110 01:54:50.542020    4942 out.go:179] * Verifying registry addon...
	I0110 01:54:50.545173    4942 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0110 01:54:50.546721    4942 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0110 01:54:50.557626    4942 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0110 01:54:50.557649    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:50.558515    4942 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0110 01:54:50.558529    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0110 01:54:50.566352    4942 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0110 01:54:50.789498    4942 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.017475526s)
	I0110 01:54:50.789571    4942 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-106930"
	I0110 01:54:50.792527    4942 out.go:179] * Verifying csi-hostpath-driver addon...
	I0110 01:54:50.796212    4942 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0110 01:54:50.801631    4942 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0110 01:54:50.801661    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:50.861059    4942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0110 01:54:51.048294    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:51.050186    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:51.290453    4942 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0110 01:54:51.290585    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:51.302584    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:51.315936    4942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:54:51.443011    4942 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0110 01:54:51.469465    4942 addons.go:239] Setting addon gcp-auth=true in "addons-106930"
	I0110 01:54:51.469566    4942 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:54:51.470021    4942 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:54:51.492228    4942 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0110 01:54:51.492289    4942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:54:51.514474    4942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:54:51.548632    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:51.550555    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0110 01:54:51.614215    4942 node_ready.go:57] node "addons-106930" has "Ready":"False" status (will retry)
	I0110 01:54:51.634167    4942 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I0110 01:54:51.637079    4942 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0110 01:54:51.639916    4942 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0110 01:54:51.639941    4942 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0110 01:54:51.652268    4942 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0110 01:54:51.652327    4942 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0110 01:54:51.664569    4942 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0110 01:54:51.664592    4942 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0110 01:54:51.683416    4942 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0110 01:54:51.799436    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:52.050831    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:52.050968    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:52.192809    4942 addons.go:495] Verifying addon gcp-auth=true in "addons-106930"
	I0110 01:54:52.196006    4942 out.go:179] * Verifying gcp-auth addon...
	I0110 01:54:52.200572    4942 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0110 01:54:52.209438    4942 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0110 01:54:52.209515    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:52.299736    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:52.549966    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:52.550142    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:52.704140    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:52.799403    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:53.048589    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:53.050183    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:53.204358    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:53.299305    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:53.548457    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:53.550202    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:53.705002    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:53.799919    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:54.049550    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:54.049994    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0110 01:54:54.113542    4942 node_ready.go:57] node "addons-106930" has "Ready":"False" status (will retry)
	I0110 01:54:54.204427    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:54.298765    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:54.549038    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:54.549976    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:54.704397    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:54.798873    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:55.048014    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:55.050021    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:55.203224    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:55.299602    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:55.548559    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:55.549409    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:55.703975    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:55.799719    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:56.049360    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:56.050369    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0110 01:54:56.114002    4942 node_ready.go:57] node "addons-106930" has "Ready":"False" status (will retry)
	I0110 01:54:56.203730    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:56.299373    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:56.548357    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:56.549341    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:56.703727    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:56.800258    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:57.049533    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:57.050630    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:57.203775    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:57.299504    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:57.582668    4942 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0110 01:54:57.582693    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:57.583897    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:57.625935    4942 node_ready.go:49] node "addons-106930" is "Ready"
	I0110 01:54:57.625966    4942 node_ready.go:38] duration metric: took 10.515271513s for node "addons-106930" to be "Ready" ...
	I0110 01:54:57.625988    4942 api_server.go:52] waiting for apiserver process to appear ...
	I0110 01:54:57.626060    4942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 01:54:57.656925    4942 api_server.go:72] duration metric: took 14.315312381s to wait for apiserver process to appear ...
	I0110 01:54:57.656958    4942 api_server.go:88] waiting for apiserver healthz status ...
	I0110 01:54:57.656977    4942 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0110 01:54:57.681569    4942 api_server.go:325] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0110 01:54:57.686114    4942 api_server.go:141] control plane version: v1.35.0
	I0110 01:54:57.686142    4942 api_server.go:131] duration metric: took 29.177112ms to wait for apiserver health ...
	I0110 01:54:57.686152    4942 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 01:54:57.784748    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:57.785318    4942 system_pods.go:59] 19 kube-system pods found
	I0110 01:54:57.785343    4942 system_pods.go:61] "coredns-7d764666f9-84fwv" [c1d3f6bb-a5de-4390-8605-9db4703c28e6] Pending
	I0110 01:54:57.785349    4942 system_pods.go:61] "csi-hostpath-attacher-0" [a5f4327d-3614-4174-9036-70310051c8a2] Pending
	I0110 01:54:57.785357    4942 system_pods.go:61] "csi-hostpath-resizer-0" [1ac31177-3b83-4a5b-a0e7-a44fdfc834ae] Pending
	I0110 01:54:57.785366    4942 system_pods.go:61] "csi-hostpathplugin-669m5" [b755c7db-16c8-4742-8f16-5e9e25f36339] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0110 01:54:57.785375    4942 system_pods.go:61] "etcd-addons-106930" [d9d168a9-020b-4704-b3c3-aaa0de3c1c2f] Running
	I0110 01:54:57.785382    4942 system_pods.go:61] "kindnet-2kd7v" [d7f5f54b-f269-42db-9e4f-25c3f3082112] Running
	I0110 01:54:57.785394    4942 system_pods.go:61] "kube-apiserver-addons-106930" [6dfc1c18-68a9-4dcf-94f8-4328d2753298] Running
	I0110 01:54:57.785403    4942 system_pods.go:61] "kube-controller-manager-addons-106930" [7f076e45-1406-413b-91ea-31bbac913899] Running
	I0110 01:54:57.785408    4942 system_pods.go:61] "kube-ingress-dns-minikube" [ce768ebf-561a-44ba-b6c9-f316f1ceade7] Pending
	I0110 01:54:57.785412    4942 system_pods.go:61] "kube-proxy-fd2c5" [ba68c41c-6f0a-4184-8a65-27032c39536a] Running
	I0110 01:54:57.785418    4942 system_pods.go:61] "kube-scheduler-addons-106930" [3543fed7-51b4-495c-9988-8669b7f16306] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 01:54:57.785423    4942 system_pods.go:61] "metrics-server-5778bb4788-b4gv8" [7d1af253-4564-484c-93ef-eb8b40ae57ef] Pending
	I0110 01:54:57.785428    4942 system_pods.go:61] "nvidia-device-plugin-daemonset-thwrk" [16277925-8e1a-4395-ae02-badbd996a408] Pending
	I0110 01:54:57.785432    4942 system_pods.go:61] "registry-788cd7d5bc-rcs59" [375226f7-84d3-439b-9562-be87a865abbe] Pending
	I0110 01:54:57.785438    4942 system_pods.go:61] "registry-creds-567fb78d95-c697l" [37b47d7d-a8ff-4e35-a63d-a4c68651a0a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0110 01:54:57.785443    4942 system_pods.go:61] "registry-proxy-h9l6w" [7d1bade3-9c33-4933-a888-ed07ffed5bfb] Pending
	I0110 01:54:57.785447    4942 system_pods.go:61] "snapshot-controller-6588d87457-7ck2g" [f9565fba-339d-4ed3-b9db-6c17cbf2a853] Pending
	I0110 01:54:57.785451    4942 system_pods.go:61] "snapshot-controller-6588d87457-gxlfc" [49bc2e64-b404-4387-ae75-20aa477ca91f] Pending
	I0110 01:54:57.785455    4942 system_pods.go:61] "storage-provisioner" [509a9ebd-306a-4f8c-a6f5-0c318964db85] Pending
	I0110 01:54:57.785467    4942 system_pods.go:74] duration metric: took 99.303578ms to wait for pod list to return data ...
	I0110 01:54:57.785479    4942 default_sa.go:34] waiting for default service account to be created ...
	I0110 01:54:57.840129    4942 default_sa.go:45] found service account: "default"
	I0110 01:54:57.840157    4942 default_sa.go:55] duration metric: took 54.671872ms for default service account to be created ...
	I0110 01:54:57.840168    4942 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 01:54:57.854802    4942 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0110 01:54:57.854837    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:57.855405    4942 system_pods.go:86] 19 kube-system pods found
	I0110 01:54:57.855429    4942 system_pods.go:89] "coredns-7d764666f9-84fwv" [c1d3f6bb-a5de-4390-8605-9db4703c28e6] Pending
	I0110 01:54:57.855436    4942 system_pods.go:89] "csi-hostpath-attacher-0" [a5f4327d-3614-4174-9036-70310051c8a2] Pending
	I0110 01:54:57.855450    4942 system_pods.go:89] "csi-hostpath-resizer-0" [1ac31177-3b83-4a5b-a0e7-a44fdfc834ae] Pending
	I0110 01:54:57.855465    4942 system_pods.go:89] "csi-hostpathplugin-669m5" [b755c7db-16c8-4742-8f16-5e9e25f36339] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0110 01:54:57.855470    4942 system_pods.go:89] "etcd-addons-106930" [d9d168a9-020b-4704-b3c3-aaa0de3c1c2f] Running
	I0110 01:54:57.855482    4942 system_pods.go:89] "kindnet-2kd7v" [d7f5f54b-f269-42db-9e4f-25c3f3082112] Running
	I0110 01:54:57.855487    4942 system_pods.go:89] "kube-apiserver-addons-106930" [6dfc1c18-68a9-4dcf-94f8-4328d2753298] Running
	I0110 01:54:57.855492    4942 system_pods.go:89] "kube-controller-manager-addons-106930" [7f076e45-1406-413b-91ea-31bbac913899] Running
	I0110 01:54:57.855502    4942 system_pods.go:89] "kube-ingress-dns-minikube" [ce768ebf-561a-44ba-b6c9-f316f1ceade7] Pending
	I0110 01:54:57.855506    4942 system_pods.go:89] "kube-proxy-fd2c5" [ba68c41c-6f0a-4184-8a65-27032c39536a] Running
	I0110 01:54:57.855520    4942 system_pods.go:89] "kube-scheduler-addons-106930" [3543fed7-51b4-495c-9988-8669b7f16306] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 01:54:57.855532    4942 system_pods.go:89] "metrics-server-5778bb4788-b4gv8" [7d1af253-4564-484c-93ef-eb8b40ae57ef] Pending
	I0110 01:54:57.855537    4942 system_pods.go:89] "nvidia-device-plugin-daemonset-thwrk" [16277925-8e1a-4395-ae02-badbd996a408] Pending
	I0110 01:54:57.855541    4942 system_pods.go:89] "registry-788cd7d5bc-rcs59" [375226f7-84d3-439b-9562-be87a865abbe] Pending
	I0110 01:54:57.855547    4942 system_pods.go:89] "registry-creds-567fb78d95-c697l" [37b47d7d-a8ff-4e35-a63d-a4c68651a0a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0110 01:54:57.855552    4942 system_pods.go:89] "registry-proxy-h9l6w" [7d1bade3-9c33-4933-a888-ed07ffed5bfb] Pending
	I0110 01:54:57.855560    4942 system_pods.go:89] "snapshot-controller-6588d87457-7ck2g" [f9565fba-339d-4ed3-b9db-6c17cbf2a853] Pending
	I0110 01:54:57.855569    4942 system_pods.go:89] "snapshot-controller-6588d87457-gxlfc" [49bc2e64-b404-4387-ae75-20aa477ca91f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 01:54:57.855580    4942 system_pods.go:89] "storage-provisioner" [509a9ebd-306a-4f8c-a6f5-0c318964db85] Pending
	I0110 01:54:57.855608    4942 retry.go:84] will retry after 200ms: missing components: kube-dns
	I0110 01:54:58.057244    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:58.057530    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:58.111027    4942 system_pods.go:86] 19 kube-system pods found
	I0110 01:54:58.111070    4942 system_pods.go:89] "coredns-7d764666f9-84fwv" [c1d3f6bb-a5de-4390-8605-9db4703c28e6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 01:54:58.111080    4942 system_pods.go:89] "csi-hostpath-attacher-0" [a5f4327d-3614-4174-9036-70310051c8a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0110 01:54:58.111089    4942 system_pods.go:89] "csi-hostpath-resizer-0" [1ac31177-3b83-4a5b-a0e7-a44fdfc834ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0110 01:54:58.111097    4942 system_pods.go:89] "csi-hostpathplugin-669m5" [b755c7db-16c8-4742-8f16-5e9e25f36339] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0110 01:54:58.111102    4942 system_pods.go:89] "etcd-addons-106930" [d9d168a9-020b-4704-b3c3-aaa0de3c1c2f] Running
	I0110 01:54:58.111107    4942 system_pods.go:89] "kindnet-2kd7v" [d7f5f54b-f269-42db-9e4f-25c3f3082112] Running
	I0110 01:54:58.111113    4942 system_pods.go:89] "kube-apiserver-addons-106930" [6dfc1c18-68a9-4dcf-94f8-4328d2753298] Running
	I0110 01:54:58.111121    4942 system_pods.go:89] "kube-controller-manager-addons-106930" [7f076e45-1406-413b-91ea-31bbac913899] Running
	I0110 01:54:58.111136    4942 system_pods.go:89] "kube-ingress-dns-minikube" [ce768ebf-561a-44ba-b6c9-f316f1ceade7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0110 01:54:58.111145    4942 system_pods.go:89] "kube-proxy-fd2c5" [ba68c41c-6f0a-4184-8a65-27032c39536a] Running
	I0110 01:54:58.111152    4942 system_pods.go:89] "kube-scheduler-addons-106930" [3543fed7-51b4-495c-9988-8669b7f16306] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 01:54:58.111159    4942 system_pods.go:89] "metrics-server-5778bb4788-b4gv8" [7d1af253-4564-484c-93ef-eb8b40ae57ef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0110 01:54:58.111174    4942 system_pods.go:89] "nvidia-device-plugin-daemonset-thwrk" [16277925-8e1a-4395-ae02-badbd996a408] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0110 01:54:58.111187    4942 system_pods.go:89] "registry-788cd7d5bc-rcs59" [375226f7-84d3-439b-9562-be87a865abbe] Pending
	I0110 01:54:58.111198    4942 system_pods.go:89] "registry-creds-567fb78d95-c697l" [37b47d7d-a8ff-4e35-a63d-a4c68651a0a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0110 01:54:58.111209    4942 system_pods.go:89] "registry-proxy-h9l6w" [7d1bade3-9c33-4933-a888-ed07ffed5bfb] Pending
	I0110 01:54:58.111214    4942 system_pods.go:89] "snapshot-controller-6588d87457-7ck2g" [f9565fba-339d-4ed3-b9db-6c17cbf2a853] Pending
	I0110 01:54:58.111223    4942 system_pods.go:89] "snapshot-controller-6588d87457-gxlfc" [49bc2e64-b404-4387-ae75-20aa477ca91f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 01:54:58.111234    4942 system_pods.go:89] "storage-provisioner" [509a9ebd-306a-4f8c-a6f5-0c318964db85] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 01:54:58.209609    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:58.317672    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:58.416783    4942 system_pods.go:86] 19 kube-system pods found
	I0110 01:54:58.416821    4942 system_pods.go:89] "coredns-7d764666f9-84fwv" [c1d3f6bb-a5de-4390-8605-9db4703c28e6] Running
	I0110 01:54:58.416832    4942 system_pods.go:89] "csi-hostpath-attacher-0" [a5f4327d-3614-4174-9036-70310051c8a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0110 01:54:58.416840    4942 system_pods.go:89] "csi-hostpath-resizer-0" [1ac31177-3b83-4a5b-a0e7-a44fdfc834ae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0110 01:54:58.416856    4942 system_pods.go:89] "csi-hostpathplugin-669m5" [b755c7db-16c8-4742-8f16-5e9e25f36339] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0110 01:54:58.416867    4942 system_pods.go:89] "etcd-addons-106930" [d9d168a9-020b-4704-b3c3-aaa0de3c1c2f] Running
	I0110 01:54:58.416873    4942 system_pods.go:89] "kindnet-2kd7v" [d7f5f54b-f269-42db-9e4f-25c3f3082112] Running
	I0110 01:54:58.416878    4942 system_pods.go:89] "kube-apiserver-addons-106930" [6dfc1c18-68a9-4dcf-94f8-4328d2753298] Running
	I0110 01:54:58.416895    4942 system_pods.go:89] "kube-controller-manager-addons-106930" [7f076e45-1406-413b-91ea-31bbac913899] Running
	I0110 01:54:58.416903    4942 system_pods.go:89] "kube-ingress-dns-minikube" [ce768ebf-561a-44ba-b6c9-f316f1ceade7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0110 01:54:58.416913    4942 system_pods.go:89] "kube-proxy-fd2c5" [ba68c41c-6f0a-4184-8a65-27032c39536a] Running
	I0110 01:54:58.416920    4942 system_pods.go:89] "kube-scheduler-addons-106930" [3543fed7-51b4-495c-9988-8669b7f16306] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 01:54:58.416927    4942 system_pods.go:89] "metrics-server-5778bb4788-b4gv8" [7d1af253-4564-484c-93ef-eb8b40ae57ef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0110 01:54:58.416935    4942 system_pods.go:89] "nvidia-device-plugin-daemonset-thwrk" [16277925-8e1a-4395-ae02-badbd996a408] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0110 01:54:58.416947    4942 system_pods.go:89] "registry-788cd7d5bc-rcs59" [375226f7-84d3-439b-9562-be87a865abbe] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0110 01:54:58.416954    4942 system_pods.go:89] "registry-creds-567fb78d95-c697l" [37b47d7d-a8ff-4e35-a63d-a4c68651a0a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0110 01:54:58.416972    4942 system_pods.go:89] "registry-proxy-h9l6w" [7d1bade3-9c33-4933-a888-ed07ffed5bfb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0110 01:54:58.416979    4942 system_pods.go:89] "snapshot-controller-6588d87457-7ck2g" [f9565fba-339d-4ed3-b9db-6c17cbf2a853] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 01:54:58.416990    4942 system_pods.go:89] "snapshot-controller-6588d87457-gxlfc" [49bc2e64-b404-4387-ae75-20aa477ca91f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 01:54:58.416997    4942 system_pods.go:89] "storage-provisioner" [509a9ebd-306a-4f8c-a6f5-0c318964db85] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 01:54:58.417007    4942 system_pods.go:126] duration metric: took 576.834212ms to wait for k8s-apps to be running ...
	I0110 01:54:58.417018    4942 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 01:54:58.417080    4942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 01:54:58.447035    4942 system_svc.go:56] duration metric: took 30.008226ms WaitForService to wait for kubelet
	I0110 01:54:58.447076    4942 kubeadm.go:587] duration metric: took 15.10546814s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 01:54:58.447097    4942 node_conditions.go:102] verifying NodePressure condition ...
	I0110 01:54:58.456599    4942 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 01:54:58.456630    4942 node_conditions.go:123] node cpu capacity is 2
	I0110 01:54:58.456644    4942 node_conditions.go:105] duration metric: took 9.541712ms to run NodePressure ...
	I0110 01:54:58.456666    4942 start.go:242] waiting for startup goroutines ...
	I0110 01:54:58.552626    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:58.552980    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:58.704744    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:58.800399    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:59.048442    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:59.049232    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:59.204348    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:59.305754    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:54:59.551488    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:54:59.551838    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:54:59.704063    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:54:59.800136    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:00.095913    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:00.096295    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:00.210228    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:00.323373    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:00.551484    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:00.552380    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:00.705616    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:00.800352    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:01.051098    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:01.051639    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:01.204977    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:01.300567    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:01.549345    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:01.551707    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:01.704014    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:01.800275    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:02.049015    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:02.051572    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:02.206977    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:02.322846    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:02.552099    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:02.552517    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:02.705272    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:02.801487    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:03.054009    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:03.054453    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:03.203877    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:03.308856    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:03.553201    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:03.553638    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:03.704649    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:03.804212    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:04.059100    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:04.064223    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:04.208491    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:04.304949    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:04.550737    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:04.551074    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:04.705066    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:04.800407    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:05.051415    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:05.051708    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:05.207001    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:05.300535    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:05.551159    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:05.551422    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:05.704232    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:05.799614    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:06.052861    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:06.053188    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:06.203881    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:06.299578    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:06.548354    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:06.550013    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:06.704553    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:06.799422    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:07.050386    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:07.050784    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:07.203666    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:07.300214    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:07.549223    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:07.549663    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:07.707232    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:07.808260    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:08.048671    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:08.051789    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:08.204011    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:08.300353    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:08.550558    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:08.550964    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:08.704610    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:08.806186    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:09.047951    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:09.049841    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:09.203761    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:09.305622    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:09.549240    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:09.550545    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:09.704304    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:09.800380    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:10.054190    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:10.054839    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:10.204649    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:10.302527    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:10.551073    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:10.552177    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:10.705410    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:10.800560    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:11.053451    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:11.055475    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:11.204629    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:11.300824    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:11.551681    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:11.552145    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:11.704143    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:11.801187    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:12.050934    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:12.051387    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:12.204276    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:12.299593    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:12.557125    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:12.557481    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:12.704138    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:12.800621    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:13.053065    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:13.054406    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:13.204388    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:13.301811    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:13.551042    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:13.551474    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:13.704334    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:13.799850    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:14.051138    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:14.051592    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:14.204655    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:14.299458    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:14.548838    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:14.551322    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:14.704521    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:14.799764    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:15.049612    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:15.049612    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:15.204532    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:15.304000    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:15.549510    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:15.550818    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:15.705355    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:15.800323    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:16.050447    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:16.051388    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:16.204627    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:16.301296    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:16.549737    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:16.550275    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:16.705154    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:16.800002    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:17.049671    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:17.051042    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:17.204465    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:17.300328    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:17.560109    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:17.560442    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:17.704637    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:17.800441    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:18.050319    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:18.050479    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:18.210821    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:18.300414    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:18.551230    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:18.552722    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:18.705379    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:18.802436    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:19.052648    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:19.052948    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:19.206451    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:19.299665    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:19.548522    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:19.550358    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:19.705968    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:19.802377    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:20.049243    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:20.050254    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:20.203996    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:20.299650    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:20.548446    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:20.550355    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:20.704337    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:20.799419    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:21.049362    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:21.050651    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:21.203738    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:21.300492    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:21.549405    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:21.549829    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:21.704212    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:21.799240    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:22.048419    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:22.050510    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:22.203685    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:22.299594    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:22.549076    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:22.549996    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:22.704402    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:22.799744    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:23.048501    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:23.050791    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:23.203618    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:23.299627    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:23.553558    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:23.555754    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:23.704923    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:23.805755    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:24.056296    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:24.056513    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:24.204205    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:24.302164    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:24.550615    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:24.551131    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:24.707240    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:24.807278    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:25.049406    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:25.051379    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:25.204484    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:25.299981    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:25.548862    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:25.551555    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:25.703894    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:25.800170    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:26.049996    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:26.050255    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:26.205778    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:26.306962    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:26.549220    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:26.550424    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:26.704477    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:26.805083    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:27.049838    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:27.051275    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:27.204174    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:27.299857    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:27.550969    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:27.551238    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:27.703759    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:27.800235    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:28.050764    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:28.050935    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:28.204367    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:28.300313    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:28.550494    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:28.550973    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:28.704042    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:28.801090    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:29.058549    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:29.058704    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:29.210644    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:29.302607    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:29.550882    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:29.551170    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:29.703734    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:29.800023    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:30.051311    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:30.051508    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:30.204567    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:30.300643    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:30.549572    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:30.550856    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:30.703952    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:30.800708    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:31.048464    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:31.050419    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:31.205225    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:31.305659    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:31.550496    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:31.550669    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:31.703439    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:31.799967    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:32.051009    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:32.051424    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:32.204402    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:32.299366    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:32.549956    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:32.550121    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:32.704392    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:32.805043    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:33.049563    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:33.050705    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:33.203594    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:33.300586    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:33.550618    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:33.553123    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:33.704729    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:33.800467    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:34.054636    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:34.055008    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:34.203685    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:34.300135    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:34.549835    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:34.549982    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:34.703851    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:34.800708    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:35.050117    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:35.053089    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:35.203838    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:35.299066    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:35.550351    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:35.550780    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:35.703674    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:35.800517    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:36.050317    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:36.051429    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:36.204473    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:36.300351    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:36.549944    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:36.550518    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:36.705228    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:36.805841    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:37.051447    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:37.052350    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 01:55:37.204759    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:37.300906    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:37.548445    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:37.550121    4942 kapi.go:107] duration metric: took 47.00339934s to wait for kubernetes.io/minikube-addons=registry ...
	I0110 01:55:37.703603    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:37.800966    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:38.049322    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:38.204548    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:38.300016    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:38.549185    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:38.704309    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:38.800284    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:39.048539    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:39.204258    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:39.300517    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:39.549474    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:39.703388    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:39.799506    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:40.049973    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:40.205614    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:40.303364    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:40.548394    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:40.723474    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:40.810187    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:41.052510    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:41.203652    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:41.300055    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:41.549292    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:41.705013    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:41.801157    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:42.055613    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:42.205729    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:42.300987    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:42.549700    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:42.703602    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:42.799965    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:43.048377    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:43.204838    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:43.300784    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:43.549220    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:43.704429    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:43.799700    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:44.049196    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:44.204051    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:44.304417    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:44.548690    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:44.703609    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:44.804757    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:45.060902    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:45.212300    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 01:55:45.301088    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:45.551949    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:45.704495    4942 kapi.go:107] duration metric: took 53.503907404s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0110 01:55:45.707581    4942 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-106930 cluster.
	I0110 01:55:45.710469    4942 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0110 01:55:45.713383    4942 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0110 01:55:45.799223    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:46.048759    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:46.300811    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:46.549745    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:46.813266    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:47.048840    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:47.301035    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:47.548405    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:47.799822    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:48.052278    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:48.299647    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:48.548862    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:48.805879    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:49.052445    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:49.302677    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:49.548469    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:49.799764    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:50.049249    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:50.300531    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:50.548359    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:50.803220    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:51.050499    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:51.309214    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:51.553409    4942 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 01:55:51.801206    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:52.048512    4942 kapi.go:107] duration metric: took 1m1.50333998s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0110 01:55:52.299378    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:52.800470    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:53.300242    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:53.800554    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:54.301001    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:54.813302    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:55.300603    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:55.802239    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:56.301126    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:56.801112    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:57.302729    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:57.800329    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:58.300099    4942 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 01:55:58.799782    4942 kapi.go:107] duration metric: took 1m8.003570409s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0110 01:55:58.802925    4942 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, cloud-spanner, ingress-dns, inspektor-gadget, storage-provisioner, registry-creds, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0110 01:55:58.805731    4942 addons.go:530] duration metric: took 1m15.463687191s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin cloud-spanner ingress-dns inspektor-gadget storage-provisioner registry-creds metrics-server yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0110 01:55:58.805783    4942 start.go:247] waiting for cluster config update ...
	I0110 01:55:58.805837    4942 start.go:256] writing updated cluster config ...
	I0110 01:55:58.806145    4942 ssh_runner.go:195] Run: rm -f paused
	I0110 01:55:58.809689    4942 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 01:55:58.812884    4942 pod_ready.go:83] waiting for pod "coredns-7d764666f9-84fwv" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 01:55:58.817236    4942 pod_ready.go:94] pod "coredns-7d764666f9-84fwv" is "Ready"
	I0110 01:55:58.817265    4942 pod_ready.go:86] duration metric: took 4.354548ms for pod "coredns-7d764666f9-84fwv" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 01:55:58.819331    4942 pod_ready.go:83] waiting for pod "etcd-addons-106930" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 01:55:58.823442    4942 pod_ready.go:94] pod "etcd-addons-106930" is "Ready"
	I0110 01:55:58.823468    4942 pod_ready.go:86] duration metric: took 4.111716ms for pod "etcd-addons-106930" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 01:55:58.825705    4942 pod_ready.go:83] waiting for pod "kube-apiserver-addons-106930" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 01:55:58.829880    4942 pod_ready.go:94] pod "kube-apiserver-addons-106930" is "Ready"
	I0110 01:55:58.829904    4942 pod_ready.go:86] duration metric: took 4.177175ms for pod "kube-apiserver-addons-106930" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 01:55:58.832069    4942 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-106930" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 01:55:59.214134    4942 pod_ready.go:94] pod "kube-controller-manager-addons-106930" is "Ready"
	I0110 01:55:59.214157    4942 pod_ready.go:86] duration metric: took 382.063255ms for pod "kube-controller-manager-addons-106930" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 01:55:59.413389    4942 pod_ready.go:83] waiting for pod "kube-proxy-fd2c5" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 01:55:59.813436    4942 pod_ready.go:94] pod "kube-proxy-fd2c5" is "Ready"
	I0110 01:55:59.813462    4942 pod_ready.go:86] duration metric: took 400.046798ms for pod "kube-proxy-fd2c5" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 01:56:00.035712    4942 pod_ready.go:83] waiting for pod "kube-scheduler-addons-106930" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 01:56:00.415631    4942 pod_ready.go:94] pod "kube-scheduler-addons-106930" is "Ready"
	I0110 01:56:00.415662    4942 pod_ready.go:86] duration metric: took 379.917658ms for pod "kube-scheduler-addons-106930" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 01:56:00.415677    4942 pod_ready.go:40] duration metric: took 1.605957825s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 01:56:00.486571    4942 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 01:56:00.489707    4942 out.go:203] 
	W0110 01:56:00.492757    4942 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 01:56:00.495508    4942 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 01:56:00.498379    4942 out.go:179] * Done! kubectl is now configured to use "addons-106930" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 10 01:56:29 addons-106930 crio[831]: time="2026-01-10T01:56:29.611108838Z" level=info msg="Started container" PID=5496 containerID=6bae82a5122099b664f3ddb99faf30fa78fdc8ba520cf429fb7ae203a5ed0b59 description=default/test-local-path/busybox id=c63f8020-6c06-4dd0-84a4-dbf09e28b9e6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bab1951e681d7f52ede01cc05ea449fcba7019c805c1f5c2a12ce0e8cdfb8ed9
	Jan 10 01:56:30 addons-106930 crio[831]: time="2026-01-10T01:56:30.759118263Z" level=info msg="Stopping pod sandbox: bab1951e681d7f52ede01cc05ea449fcba7019c805c1f5c2a12ce0e8cdfb8ed9" id=172d35ba-7fd7-4c4d-86fd-ec7a51c931e3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 10 01:56:30 addons-106930 crio[831]: time="2026-01-10T01:56:30.759399198Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:bab1951e681d7f52ede01cc05ea449fcba7019c805c1f5c2a12ce0e8cdfb8ed9 UID:bfe1e91c-7423-46fc-823b-addfb55e70a9 NetNS:/var/run/netns/3a045416-4d5e-4e4b-ba21-6d8321b740a6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002830458}] Aliases:map[]}"
	Jan 10 01:56:30 addons-106930 crio[831]: time="2026-01-10T01:56:30.759544523Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Jan 10 01:56:30 addons-106930 crio[831]: time="2026-01-10T01:56:30.794225527Z" level=info msg="Stopped pod sandbox: bab1951e681d7f52ede01cc05ea449fcba7019c805c1f5c2a12ce0e8cdfb8ed9" id=172d35ba-7fd7-4c4d-86fd-ec7a51c931e3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 10 01:56:32 addons-106930 crio[831]: time="2026-01-10T01:56:32.576352901Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e/POD" id=286af20b-e399-4847-8b9a-321da5d5cb14 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 01:56:32 addons-106930 crio[831]: time="2026-01-10T01:56:32.576520338Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 01:56:32 addons-106930 crio[831]: time="2026-01-10T01:56:32.598332543Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e Namespace:local-path-storage ID:508980ec65dcf6b240f145917d6a1789edc7e79f74d4d0fe107bfa725db9030f UID:f0794e14-cf54-43d6-9596-8539931b4191 NetNS:/var/run/netns/77041ad2-726d-4963-8347-c8b20e191a46 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001326868}] Aliases:map[]}"
	Jan 10 01:56:32 addons-106930 crio[831]: time="2026-01-10T01:56:32.598582842Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e to CNI network \"kindnet\" (type=ptp)"
	Jan 10 01:56:32 addons-106930 crio[831]: time="2026-01-10T01:56:32.630939433Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e Namespace:local-path-storage ID:508980ec65dcf6b240f145917d6a1789edc7e79f74d4d0fe107bfa725db9030f UID:f0794e14-cf54-43d6-9596-8539931b4191 NetNS:/var/run/netns/77041ad2-726d-4963-8347-c8b20e191a46 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001326868}] Aliases:map[]}"
	Jan 10 01:56:32 addons-106930 crio[831]: time="2026-01-10T01:56:32.631301539Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e for CNI network kindnet (type=ptp)"
	Jan 10 01:56:32 addons-106930 crio[831]: time="2026-01-10T01:56:32.642268699Z" level=info msg="Ran pod sandbox 508980ec65dcf6b240f145917d6a1789edc7e79f74d4d0fe107bfa725db9030f with infra container: local-path-storage/helper-pod-delete-pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e/POD" id=286af20b-e399-4847-8b9a-321da5d5cb14 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 01:56:32 addons-106930 crio[831]: time="2026-01-10T01:56:32.644083596Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=f50d33f7-b0bf-4e94-9906-4174a810c7e8 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 01:56:32 addons-106930 crio[831]: time="2026-01-10T01:56:32.653122491Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=0ce7b836-44a8-4339-8f9a-2f227bf3e5b5 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 01:56:32 addons-106930 crio[831]: time="2026-01-10T01:56:32.66077014Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e/helper-pod" id=162e46f0-05b4-4e5c-b355-505a697e06d1 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 01:56:32 addons-106930 crio[831]: time="2026-01-10T01:56:32.661035724Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 01:56:32 addons-106930 crio[831]: time="2026-01-10T01:56:32.66993493Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 01:56:32 addons-106930 crio[831]: time="2026-01-10T01:56:32.67044516Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 01:56:32 addons-106930 crio[831]: time="2026-01-10T01:56:32.692446734Z" level=info msg="Created container b607f363b33d09922ea2c9327367f554377b286a3f83703c890ac4bb464f960e: local-path-storage/helper-pod-delete-pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e/helper-pod" id=162e46f0-05b4-4e5c-b355-505a697e06d1 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 01:56:32 addons-106930 crio[831]: time="2026-01-10T01:56:32.69678288Z" level=info msg="Starting container: b607f363b33d09922ea2c9327367f554377b286a3f83703c890ac4bb464f960e" id=741f9e49-cd34-47f1-93b2-727021992820 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 01:56:32 addons-106930 crio[831]: time="2026-01-10T01:56:32.702810571Z" level=info msg="Started container" PID=5619 containerID=b607f363b33d09922ea2c9327367f554377b286a3f83703c890ac4bb464f960e description=local-path-storage/helper-pod-delete-pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e/helper-pod id=741f9e49-cd34-47f1-93b2-727021992820 name=/runtime.v1.RuntimeService/StartContainer sandboxID=508980ec65dcf6b240f145917d6a1789edc7e79f74d4d0fe107bfa725db9030f
	Jan 10 01:56:33 addons-106930 crio[831]: time="2026-01-10T01:56:33.775562606Z" level=info msg="Stopping pod sandbox: 508980ec65dcf6b240f145917d6a1789edc7e79f74d4d0fe107bfa725db9030f" id=56915d1c-3752-442c-bb0b-15010dce66f4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jan 10 01:56:33 addons-106930 crio[831]: time="2026-01-10T01:56:33.775947834Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e Namespace:local-path-storage ID:508980ec65dcf6b240f145917d6a1789edc7e79f74d4d0fe107bfa725db9030f UID:f0794e14-cf54-43d6-9596-8539931b4191 NetNS:/var/run/netns/77041ad2-726d-4963-8347-c8b20e191a46 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001327070}] Aliases:map[]}"
	Jan 10 01:56:33 addons-106930 crio[831]: time="2026-01-10T01:56:33.776565728Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e from CNI network \"kindnet\" (type=ptp)"
	Jan 10 01:56:33 addons-106930 crio[831]: time="2026-01-10T01:56:33.807259615Z" level=info msg="Stopped pod sandbox: 508980ec65dcf6b240f145917d6a1789edc7e79f74d4d0fe107bfa725db9030f" id=56915d1c-3752-442c-bb0b-15010dce66f4 name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	b607f363b33d0       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             1 second ago         Exited              helper-pod                               0                   508980ec65dcf       helper-pod-delete-pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e   local-path-storage
	6bae82a512209       docker.io/library/busybox@sha256:079b4a73854a059a2073c6e1a031b17fcbf23a47c6c59ae760d78045199e403c                                            4 seconds ago        Exited              busybox                                  0                   bab1951e681d7       test-local-path                                              default
	95581e18928a5       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            9 seconds ago        Exited              helper-pod                               0                   3ecf558611bb8       helper-pod-create-pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e   local-path-storage
	f3ab2f6adae5d       gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9                                          9 seconds ago        Exited              registry-test                            0                   0eec1a4dd095a       registry-test                                                default
	1470c6dde9fd3       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          29 seconds ago       Running             busybox                                  0                   4a148bf3c4258       busybox                                                      default
	5dafbb5913003       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          36 seconds ago       Running             csi-snapshotter                          0                   0ff8b9ad193a7       csi-hostpathplugin-669m5                                     kube-system
	54899c00ceb58       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          37 seconds ago       Running             csi-provisioner                          0                   0ff8b9ad193a7       csi-hostpathplugin-669m5                                     kube-system
	6b948bf13d65c       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            39 seconds ago       Running             liveness-probe                           0                   0ff8b9ad193a7       csi-hostpathplugin-669m5                                     kube-system
	7eebd1b695c51       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           40 seconds ago       Running             hostpath                                 0                   0ff8b9ad193a7       csi-hostpathplugin-669m5                                     kube-system
	d95a18b705959       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                41 seconds ago       Running             node-driver-registrar                    0                   0ff8b9ad193a7       csi-hostpathplugin-669m5                                     kube-system
	68dd4e1e49302       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             43 seconds ago       Running             controller                               0                   50b7d2475a3bd       ingress-nginx-controller-7847b5c79c-kxj9z                    ingress-nginx
	d48f4a8559f4f       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 49 seconds ago       Running             gcp-auth                                 0                   9a3fcaef4942a       gcp-auth-5bbcf684b5-drh8s                                    gcp-auth
	e6d771ff2fbaf       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            52 seconds ago       Running             gadget                                   0                   434c617058ac9       gadget-fzvff                                                 gadget
	62779c2e8b2f9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   56 seconds ago       Exited              patch                                    1                   9f74a205b2bcf       gcp-auth-certs-patch-px4fc                                   gcp-auth
	a458547631bdc       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              56 seconds ago       Running             registry-proxy                           0                   50c71a8af5bde       registry-proxy-h9l6w                                         kube-system
	abd9f763f3604       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   09305da0a348f       csi-hostpath-attacher-0                                      kube-system
	e27ae252f2778       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   0ff8b9ad193a7       csi-hostpathplugin-669m5                                     kube-system
	a9693cc77cf0c       nvcr.io/nvidia/k8s-device-plugin@sha256:10b7b747520ba2314061b5b319d3b2766b9cec1fd9404109c607e85b30af6905                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   e0df523450491       nvidia-device-plugin-daemonset-thwrk                         kube-system
	2c8af457d0f78       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   c46567aae7c39       snapshot-controller-6588d87457-7ck2g                         kube-system
	dbe5d7a996f35       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   a8fa34cf74b72       snapshot-controller-6588d87457-gxlfc                         kube-system
	4b4e34aaa6e53       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   f26e10f5b7559       local-path-provisioner-c44bcd496-52h24                       local-path-storage
	a23567b0b791f       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   889bbb0ad1075       registry-788cd7d5bc-rcs59                                    kube-system
	d1580471fb330       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   About a minute ago   Exited              patch                                    0                   485da448429d8       ingress-nginx-admission-patch-2chnn                          ingress-nginx
	4ec4c76353665       ghcr.io/manusa/yakd@sha256:68bfcea671292190cdd2b127455726ac24794d1f7c55ce74c33d4648a3a0f50b                                                  About a minute ago   Running             yakd                                     0                   ce4280ef7c71f       yakd-dashboard-7bcf5795cd-tp82j                              yakd-dashboard
	8bce067de04e9       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   edd5072342102       kube-ingress-dns-minikube                                    kube-system
	1a05e0992639e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   About a minute ago   Exited              create                                   0                   010ccda124a85       gcp-auth-certs-create-zzq7r                                  gcp-auth
	245c82556c091       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   1aa27671c8b46       csi-hostpath-resizer-0                                       kube-system
	0b80e01f9d69e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   About a minute ago   Exited              create                                   0                   e468353db7b61       ingress-nginx-admission-create-7ct92                         ingress-nginx
	b5d86ed114fd4       gcr.io/cloud-spanner-emulator/emulator@sha256:084e511546640743b2d25fe2ee59800bc7ec910acfc12175bad2270f159f5eba                               About a minute ago   Running             cloud-spanner-emulator                   0                   1f8e97673437b       cloud-spanner-emulator-5649ccbc87-zbt8q                      default
	b6868c2a5d4ce       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   7c570074d36bc       metrics-server-5778bb4788-b4gv8                              kube-system
	3e0c2863ef6c2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   5c67eb29d3a29       storage-provisioner                                          kube-system
	27448255d1e1c       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                                                             About a minute ago   Running             coredns                                  0                   037b66443fe48       coredns-7d764666f9-84fwv                                     kube-system
	6f32539141278       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3                                           About a minute ago   Running             kindnet-cni                              0                   2382cf1de7191       kindnet-2kd7v                                                kube-system
	5a226148b0ac7       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                                                             About a minute ago   Running             kube-proxy                               0                   8030fd81fb308       kube-proxy-fd2c5                                             kube-system
	014b6f8800d7d       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                                                             2 minutes ago        Running             etcd                                     0                   602716f2785b9       etcd-addons-106930                                           kube-system
	c951c20416799       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                                                             2 minutes ago        Running             kube-scheduler                           0                   230aec1e37dbc       kube-scheduler-addons-106930                                 kube-system
	1c96ddd0e76bb       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                                                             2 minutes ago        Running             kube-controller-manager                  0                   788d6fdae526c       kube-controller-manager-addons-106930                        kube-system
	80a58bbf677ab       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                                                             2 minutes ago        Running             kube-apiserver                           0                   115963e1a134c       kube-apiserver-addons-106930                                 kube-system
	
	
	==> coredns [27448255d1e1c54f8723ee7cca1526b639c1aebd34bdb2d75396a1ac1817885e] <==
	[INFO] 10.244.0.12:41975 - 64712 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001801703s
	[INFO] 10.244.0.12:41975 - 39931 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000107222s
	[INFO] 10.244.0.12:41975 - 19959 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000143988s
	[INFO] 10.244.0.12:38234 - 41793 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000305s
	[INFO] 10.244.0.12:38234 - 41572 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000180794s
	[INFO] 10.244.0.12:60248 - 16185 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00012222s
	[INFO] 10.244.0.12:60248 - 15947 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00016642s
	[INFO] 10.244.0.12:50676 - 44124 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00010467s
	[INFO] 10.244.0.12:50676 - 43927 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000142093s
	[INFO] 10.244.0.12:41726 - 35471 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001471514s
	[INFO] 10.244.0.12:41726 - 35274 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001431121s
	[INFO] 10.244.0.12:51681 - 7273 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000176331s
	[INFO] 10.244.0.12:51681 - 7076 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000224814s
	[INFO] 10.244.0.20:47957 - 40347 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000093126s
	[INFO] 10.244.0.20:53354 - 35324 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00039038s
	[INFO] 10.244.0.20:40512 - 52046 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000156656s
	[INFO] 10.244.0.20:53249 - 25220 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100461s
	[INFO] 10.244.0.20:50080 - 36961 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000124484s
	[INFO] 10.244.0.20:50473 - 59949 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000083706s
	[INFO] 10.244.0.20:59074 - 19638 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001589115s
	[INFO] 10.244.0.20:40903 - 10406 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002189425s
	[INFO] 10.244.0.20:56152 - 30639 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000776511s
	[INFO] 10.244.0.20:55390 - 53229 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001850826s
	[INFO] 10.244.0.22:43475 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000184011s
	[INFO] 10.244.0.22:32947 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000178194s
	
	
	==> describe nodes <==
	Name:               addons-106930
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-106930
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=addons-106930
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T01_54_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-106930
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-106930"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 01:54:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-106930
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 01:56:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 01:56:10 +0000   Sat, 10 Jan 2026 01:54:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 01:56:10 +0000   Sat, 10 Jan 2026 01:54:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 01:56:10 +0000   Sat, 10 Jan 2026 01:54:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 01:56:10 +0000   Sat, 10 Jan 2026 01:54:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-106930
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                78ea6fba-1dd3-4a76-bd58-5b09aefc1d7d
	  Boot ID:                    41f52236-76b9-4dc8-bfe3-121fbdeb9659
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  default                     cloud-spanner-emulator-5649ccbc87-zbt8q      0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  gadget                      gadget-fzvff                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  gcp-auth                    gcp-auth-5bbcf684b5-drh8s                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  ingress-nginx               ingress-nginx-controller-7847b5c79c-kxj9z    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         104s
	  kube-system                 coredns-7d764666f9-84fwv                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     111s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 csi-hostpathplugin-669m5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 etcd-addons-106930                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         118s
	  kube-system                 kindnet-2kd7v                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-addons-106930                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-addons-106930        200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-proxy-fd2c5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-addons-106930                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 metrics-server-5778bb4788-b4gv8              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         105s
	  kube-system                 nvidia-device-plugin-daemonset-thwrk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 registry-788cd7d5bc-rcs59                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 registry-creds-567fb78d95-c697l              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 registry-proxy-h9l6w                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 snapshot-controller-6588d87457-7ck2g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 snapshot-controller-6588d87457-gxlfc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  local-path-storage          local-path-provisioner-c44bcd496-52h24       0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  yakd-dashboard              yakd-dashboard-7bcf5795cd-tp82j              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     105s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  112s  node-controller  Node addons-106930 event: Registered Node addons-106930 in Controller
	
	
	==> dmesg <==
	[Jan10 01:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014084] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.508014] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034730] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.744874] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.308579] kauditd_printk_skb: 36 callbacks suppressed
	[Jan10 01:54] overlayfs: idmapped layers are currently not supported
	[  +0.058006] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [014b6f8800d7d2127c3f3e57c97f5ed5d5a2361c80253dc74bd9f42bf604e9c2] <==
	{"level":"info","ts":"2026-01-10T01:54:32.624064Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T01:54:32.861967Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-10T01:54:32.862068Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-10T01:54:32.862138Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2026-01-10T01:54:32.862191Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T01:54:32.862236Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T01:54:32.865706Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2026-01-10T01:54:32.865811Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T01:54:32.865855Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2026-01-10T01:54:32.865890Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2026-01-10T01:54:32.867190Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T01:54:32.868409Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-106930 ClientURLs:[https://192.168.49.2:2379]}","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T01:54:32.868569Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T01:54:32.868753Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T01:54:32.868914Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T01:54:32.868955Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T01:54:32.871892Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T01:54:32.872026Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T01:54:32.872093Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T01:54:32.887082Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T01:54:32.887251Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-10T01:54:32.887726Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T01:54:32.916492Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T01:54:32.929725Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2026-01-10T01:54:32.936564Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> gcp-auth [d48f4a8559f4f06a3ceff12a82733a1cf7930f8aa87fa4b3124220de32f06b70] <==
	2026/01/10 01:55:44 GCP Auth Webhook started!
	2026/01/10 01:56:01 Ready to marshal response ...
	2026/01/10 01:56:01 Ready to write response ...
	2026/01/10 01:56:01 Ready to marshal response ...
	2026/01/10 01:56:01 Ready to write response ...
	2026/01/10 01:56:01 Ready to marshal response ...
	2026/01/10 01:56:01 Ready to write response ...
	2026/01/10 01:56:21 Ready to marshal response ...
	2026/01/10 01:56:21 Ready to write response ...
	2026/01/10 01:56:23 Ready to marshal response ...
	2026/01/10 01:56:23 Ready to write response ...
	2026/01/10 01:56:23 Ready to marshal response ...
	2026/01/10 01:56:23 Ready to write response ...
	2026/01/10 01:56:32 Ready to marshal response ...
	2026/01/10 01:56:32 Ready to write response ...
	
	
	==> kernel <==
	 01:56:34 up 39 min,  0 user,  load average: 2.69, 1.39, 0.55
	Linux addons-106930 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6f32539141278af9c24103ee0c0a5e69ce237dbbe3695b92b0c5bf3956b30002] <==
	I0110 01:54:46.946084       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 01:54:47.146354       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 01:54:47.146382       1 metrics.go:72] Registering metrics
	I0110 01:54:47.146452       1 controller.go:711] "Syncing nftables rules"
	E0110 01:54:47.146789       1 controller.go:417] "reading nfqueue stats" err="open /proc/net/netfilter/nfnetlink_queue: no such file or directory"
	I0110 01:54:56.944865       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 01:54:56.944919       1 main.go:301] handling current node
	I0110 01:55:06.944918       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 01:55:06.944972       1 main.go:301] handling current node
	I0110 01:55:16.945450       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 01:55:16.945518       1 main.go:301] handling current node
	I0110 01:55:26.945330       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 01:55:26.945435       1 main.go:301] handling current node
	I0110 01:55:36.945438       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 01:55:36.945490       1 main.go:301] handling current node
	I0110 01:55:46.944824       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 01:55:46.944851       1 main.go:301] handling current node
	I0110 01:55:56.945210       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 01:55:56.945259       1 main.go:301] handling current node
	I0110 01:56:06.945672       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 01:56:06.945713       1 main.go:301] handling current node
	I0110 01:56:16.947862       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 01:56:16.947902       1 main.go:301] handling current node
	I0110 01:56:26.944936       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 01:56:26.945059       1 main.go:301] handling current node
	
	
	==> kube-apiserver [80a58bbf677ab4ba0d96a2e9411138aa345db6101e0e1af2e9ea8754ffd39109] <==
	I0110 01:54:50.677510       1 controller.go:667] quota admission added evaluator for: statefulsets.apps
	I0110 01:54:50.750941       1 alloc.go:329] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.102.227.241"}
	W0110 01:54:51.133540       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0110 01:54:51.150889       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I0110 01:54:52.080402       1 alloc.go:329] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.97.9.98"}
	W0110 01:54:57.348089       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.9.98:443: connect: connection refused
	E0110 01:54:57.348515       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.9.98:443: connect: connection refused" logger="UnhandledError"
	W0110 01:54:57.350529       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.9.98:443: connect: connection refused
	E0110 01:54:57.350559       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.9.98:443: connect: connection refused" logger="UnhandledError"
	W0110 01:54:57.409561       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.9.98:443: connect: connection refused
	E0110 01:54:57.409678       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.9.98:443: connect: connection refused" logger="UnhandledError"
	W0110 01:55:12.052683       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0110 01:55:12.082302       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0110 01:55:12.156781       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0110 01:55:12.176787       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E0110 01:55:14.361209       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.139.112:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.139.112:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.139.112:443: connect: connection refused" logger="UnhandledError"
	W0110 01:55:14.365443       1 handler_proxy.go:99] no RequestInfo found in the context
	E0110 01:55:14.365515       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0110 01:55:14.454503       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0110 01:55:14.459968       1 handler.go:304] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0110 01:56:10.342284       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38754: use of closed network connection
	E0110 01:56:10.464012       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38784: use of closed network connection
	
	
	==> kube-controller-manager [1c96ddd0e76bb0b8b444708102feeda0babec3472e7b33767118eb64739b9136] <==
	I0110 01:54:42.014955       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:42.014991       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:42.014999       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:42.015075       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:42.015081       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:42.015089       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:42.015098       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:42.015104       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:42.029416       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:42.034815       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:42.035718       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 01:54:42.045303       1 range_allocator.go:433] "Set node PodCIDR" node="addons-106930" podCIDRs=["10.244.0.0/24"]
	I0110 01:54:42.114202       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:42.114234       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 01:54:42.114241       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 01:54:42.136155       1 shared_informer.go:377] "Caches are synced"
	E0110 01:54:49.233434       1 replica_set.go:592] "Unhandled Error" err="sync \"kube-system/metrics-server-5778bb4788\" failed with pods \"metrics-server-5778bb4788-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I0110 01:55:02.025080       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	E0110 01:55:12.040099       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0110 01:55:12.040269       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0110 01:55:12.040333       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 01:55:12.141206       1 shared_informer.go:377] "Caches are synced"
	I0110 01:55:12.144598       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0110 01:55:12.149009       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 01:55:12.249543       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [5a226148b0ac79fc9d466c6da91dac92efac9abb8d69f4c01f8ce6b764e6b7bc] <==
	I0110 01:54:43.559753       1 server_linux.go:53] "Using iptables proxy"
	I0110 01:54:43.649746       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 01:54:43.749987       1 shared_informer.go:377] "Caches are synced"
	I0110 01:54:43.750050       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0110 01:54:43.750174       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 01:54:44.188554       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 01:54:44.188607       1 server_linux.go:136] "Using iptables Proxier"
	I0110 01:54:44.199193       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 01:54:44.208239       1 server.go:529] "Version info" version="v1.35.0"
	I0110 01:54:44.208267       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 01:54:44.209785       1 config.go:200] "Starting service config controller"
	I0110 01:54:44.209795       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 01:54:44.209811       1 config.go:106] "Starting endpoint slice config controller"
	I0110 01:54:44.209815       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 01:54:44.209824       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 01:54:44.209828       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 01:54:44.224125       1 config.go:309] "Starting node config controller"
	I0110 01:54:44.224143       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 01:54:44.224151       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 01:54:44.310765       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 01:54:44.310794       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 01:54:44.310817       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c951c20416799b3f43a759d9a52dedde3ee480d19bc9e255bb3cc1c62a5a8d25] <==
	E0110 01:54:35.265477       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 01:54:35.266232       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 01:54:35.266520       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 01:54:35.266926       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 01:54:35.267049       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 01:54:35.270601       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 01:54:35.277044       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 01:54:35.277319       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 01:54:35.277396       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 01:54:35.277459       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 01:54:35.277607       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 01:54:35.277686       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 01:54:35.277771       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 01:54:35.277830       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 01:54:35.277885       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 01:54:35.277986       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 01:54:36.093656       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 01:54:36.127530       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 01:54:36.135366       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 01:54:36.183369       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 01:54:36.206123       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 01:54:36.349823       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 01:54:36.368124       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E0110 01:54:36.406678       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	I0110 01:54:39.405005       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 01:56:31 addons-106930 kubelet[1271]: I0110 01:56:31.024036    1271 reconciler_common.go:299] "Volume detached for volume \"pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e\" (UniqueName: \"kubernetes.io/host-path/bfe1e91c-7423-46fc-823b-addfb55e70a9-pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e\") on node \"addons-106930\" DevicePath \"\""
	Jan 10 01:56:31 addons-106930 kubelet[1271]: I0110 01:56:31.024053    1271 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-725gw\" (UniqueName: \"kubernetes.io/projected/bfe1e91c-7423-46fc-823b-addfb55e70a9-kube-api-access-725gw\") on node \"addons-106930\" DevicePath \"\""
	Jan 10 01:56:31 addons-106930 kubelet[1271]: I0110 01:56:31.764042    1271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bab1951e681d7f52ede01cc05ea449fcba7019c805c1f5c2a12ce0e8cdfb8ed9"
	Jan 10 01:56:32 addons-106930 kubelet[1271]: I0110 01:56:32.440082    1271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/f0794e14-cf54-43d6-9596-8539931b4191-data\") pod \"helper-pod-delete-pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e\" (UID: \"f0794e14-cf54-43d6-9596-8539931b4191\") " pod="local-path-storage/helper-pod-delete-pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e"
	Jan 10 01:56:32 addons-106930 kubelet[1271]: I0110 01:56:32.440608    1271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5t7s\" (UniqueName: \"kubernetes.io/projected/f0794e14-cf54-43d6-9596-8539931b4191-kube-api-access-g5t7s\") pod \"helper-pod-delete-pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e\" (UID: \"f0794e14-cf54-43d6-9596-8539931b4191\") " pod="local-path-storage/helper-pod-delete-pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e"
	Jan 10 01:56:32 addons-106930 kubelet[1271]: I0110 01:56:32.440750    1271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/f0794e14-cf54-43d6-9596-8539931b4191-script\") pod \"helper-pod-delete-pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e\" (UID: \"f0794e14-cf54-43d6-9596-8539931b4191\") " pod="local-path-storage/helper-pod-delete-pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e"
	Jan 10 01:56:32 addons-106930 kubelet[1271]: I0110 01:56:32.440900    1271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f0794e14-cf54-43d6-9596-8539931b4191-gcp-creds\") pod \"helper-pod-delete-pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e\" (UID: \"f0794e14-cf54-43d6-9596-8539931b4191\") " pod="local-path-storage/helper-pod-delete-pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e"
	Jan 10 01:56:32 addons-106930 kubelet[1271]: W0110 01:56:32.641234    1271 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/9d54a65e91dd178badebb1deb926e0202c95a318a422eeb8b0706eb93ffdf62e/crio-508980ec65dcf6b240f145917d6a1789edc7e79f74d4d0fe107bfa725db9030f WatchSource:0}: Error finding container 508980ec65dcf6b240f145917d6a1789edc7e79f74d4d0fe107bfa725db9030f: Status 404 returned error can't find the container with id 508980ec65dcf6b240f145917d6a1789edc7e79f74d4d0fe107bfa725db9030f
	Jan 10 01:56:32 addons-106930 kubelet[1271]: I0110 01:56:32.953632    1271 kubelet_pods.go:1079] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-788cd7d5bc-rcs59" secret="" err="secret \"gcp-auth\" not found"
	Jan 10 01:56:33 addons-106930 kubelet[1271]: I0110 01:56:33.954318    1271 kubelet_pods.go:1079] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-thwrk" secret="" err="secret \"gcp-auth\" not found"
	Jan 10 01:56:33 addons-106930 kubelet[1271]: I0110 01:56:33.964469    1271 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/f0794e14-cf54-43d6-9596-8539931b4191-script\" (UniqueName: \"kubernetes.io/configmap/f0794e14-cf54-43d6-9596-8539931b4191-script\") pod \"f0794e14-cf54-43d6-9596-8539931b4191\" (UID: \"f0794e14-cf54-43d6-9596-8539931b4191\") "
	Jan 10 01:56:33 addons-106930 kubelet[1271]: I0110 01:56:33.964527    1271 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/f0794e14-cf54-43d6-9596-8539931b4191-gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f0794e14-cf54-43d6-9596-8539931b4191-gcp-creds\") pod \"f0794e14-cf54-43d6-9596-8539931b4191\" (UID: \"f0794e14-cf54-43d6-9596-8539931b4191\") "
	Jan 10 01:56:33 addons-106930 kubelet[1271]: I0110 01:56:33.964739    1271 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/f0794e14-cf54-43d6-9596-8539931b4191-data\" (UniqueName: \"kubernetes.io/host-path/f0794e14-cf54-43d6-9596-8539931b4191-data\") pod \"f0794e14-cf54-43d6-9596-8539931b4191\" (UID: \"f0794e14-cf54-43d6-9596-8539931b4191\") "
	Jan 10 01:56:33 addons-106930 kubelet[1271]: I0110 01:56:33.964775    1271 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/f0794e14-cf54-43d6-9596-8539931b4191-kube-api-access-g5t7s\" (UniqueName: \"kubernetes.io/projected/f0794e14-cf54-43d6-9596-8539931b4191-kube-api-access-g5t7s\") pod \"f0794e14-cf54-43d6-9596-8539931b4191\" (UID: \"f0794e14-cf54-43d6-9596-8539931b4191\") "
	Jan 10 01:56:33 addons-106930 kubelet[1271]: I0110 01:56:33.965926    1271 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0794e14-cf54-43d6-9596-8539931b4191-script" pod "f0794e14-cf54-43d6-9596-8539931b4191" (UID: "f0794e14-cf54-43d6-9596-8539931b4191"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Jan 10 01:56:33 addons-106930 kubelet[1271]: I0110 01:56:33.966062    1271 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0794e14-cf54-43d6-9596-8539931b4191-gcp-creds" pod "f0794e14-cf54-43d6-9596-8539931b4191" (UID: "f0794e14-cf54-43d6-9596-8539931b4191"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Jan 10 01:56:33 addons-106930 kubelet[1271]: I0110 01:56:33.966141    1271 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0794e14-cf54-43d6-9596-8539931b4191-data" pod "f0794e14-cf54-43d6-9596-8539931b4191" (UID: "f0794e14-cf54-43d6-9596-8539931b4191"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Jan 10 01:56:33 addons-106930 kubelet[1271]: I0110 01:56:33.967164    1271 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bfe1e91c-7423-46fc-823b-addfb55e70a9" path="/var/lib/kubelet/pods/bfe1e91c-7423-46fc-823b-addfb55e70a9/volumes"
	Jan 10 01:56:33 addons-106930 kubelet[1271]: I0110 01:56:33.971029    1271 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0794e14-cf54-43d6-9596-8539931b4191-kube-api-access-g5t7s" pod "f0794e14-cf54-43d6-9596-8539931b4191" (UID: "f0794e14-cf54-43d6-9596-8539931b4191"). InnerVolumeSpecName "kube-api-access-g5t7s". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Jan 10 01:56:34 addons-106930 kubelet[1271]: I0110 01:56:34.065408    1271 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/f0794e14-cf54-43d6-9596-8539931b4191-script\") on node \"addons-106930\" DevicePath \"\""
	Jan 10 01:56:34 addons-106930 kubelet[1271]: I0110 01:56:34.065456    1271 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f0794e14-cf54-43d6-9596-8539931b4191-gcp-creds\") on node \"addons-106930\" DevicePath \"\""
	Jan 10 01:56:34 addons-106930 kubelet[1271]: I0110 01:56:34.065468    1271 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/f0794e14-cf54-43d6-9596-8539931b4191-data\") on node \"addons-106930\" DevicePath \"\""
	Jan 10 01:56:34 addons-106930 kubelet[1271]: I0110 01:56:34.065478    1271 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g5t7s\" (UniqueName: \"kubernetes.io/projected/f0794e14-cf54-43d6-9596-8539931b4191-kube-api-access-g5t7s\") on node \"addons-106930\" DevicePath \"\""
	Jan 10 01:56:34 addons-106930 kubelet[1271]: I0110 01:56:34.780674    1271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="508980ec65dcf6b240f145917d6a1789edc7e79f74d4d0fe107bfa725db9030f"
	Jan 10 01:56:34 addons-106930 kubelet[1271]: E0110 01:56:34.783090    1271 status_manager.go:1045] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e\" is forbidden: User \"system:node:addons-106930\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-106930' and this object" podUID="f0794e14-cf54-43d6-9596-8539931b4191" pod="local-path-storage/helper-pod-delete-pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e"
	
	
	==> storage-provisioner [3e0c2863ef6c21cbabb9e19c9f8f04a86e18a362494488884d4f1773f7580030] <==
	W0110 01:56:09.038618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:56:11.041198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:56:11.047833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:56:13.051120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:56:13.057132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:56:15.060151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:56:15.067320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:56:17.070152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:56:17.074522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:56:19.078869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:56:19.083596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:56:21.086133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:56:21.092968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:56:23.096555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:56:23.107082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:56:25.110497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:56:25.115149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:56:27.118300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:56:27.123177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:56:29.127003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:56:29.131345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:56:31.134167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:56:31.139288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:56:33.187923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 01:56:33.250008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
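Note: the storage-provisioner log above repeatedly warns that v1 Endpoints is deprecated in v1.33+ in favour of discovery.k8s.io/v1 EndpointSlice. A minimal, hedged check against the same cluster (commands assumed available on the test host; not part of the recorded run) would be:
	kubectl --context addons-106930 get endpoints -n kube-system        # legacy API; each list/watch of it triggers the deprecation warning
	kubectl --context addons-106930 get endpointslices -n kube-system   # discovery.k8s.io/v1 replacement the warning points to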
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-106930 -n addons-106930
helpers_test.go:270: (dbg) Run:  kubectl --context addons-106930 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-7ct92 ingress-nginx-admission-patch-2chnn registry-creds-567fb78d95-c697l
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-106930 describe pod ingress-nginx-admission-create-7ct92 ingress-nginx-admission-patch-2chnn registry-creds-567fb78d95-c697l
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-106930 describe pod ingress-nginx-admission-create-7ct92 ingress-nginx-admission-patch-2chnn registry-creds-567fb78d95-c697l: exit status 1 (90.846907ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-7ct92" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2chnn" not found
	Error from server (NotFound): pods "registry-creds-567fb78d95-c697l" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-106930 describe pod ingress-nginx-admission-create-7ct92 ingress-nginx-admission-patch-2chnn registry-creds-567fb78d95-c697l: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-106930 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-106930 addons disable headlamp --alsologtostderr -v=1: exit status 11 (284.34364ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:56:35.780942   12457 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:56:35.783863   12457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:35.783917   12457 out.go:374] Setting ErrFile to fd 2...
	I0110 01:56:35.783939   12457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:35.784243   12457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 01:56:35.784559   12457 mustload.go:66] Loading cluster: addons-106930
	I0110 01:56:35.784951   12457 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:35.784991   12457 addons.go:622] checking whether the cluster is paused
	I0110 01:56:35.785115   12457 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:35.785143   12457 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:56:35.785654   12457 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:56:35.803874   12457 ssh_runner.go:195] Run: systemctl --version
	I0110 01:56:35.803932   12457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:56:35.829399   12457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:56:35.938104   12457 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:56:35.938188   12457 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:56:35.980198   12457 cri.go:96] found id: "5dafbb5913003c3cb4147e90f1dde30311fabfa04e53c2fe9e8b29032265715e"
	I0110 01:56:35.980221   12457 cri.go:96] found id: "54899c00ceb5888bd4e38adcc09f61d6f4613e853ecdd6b32e4c5bb99d581e72"
	I0110 01:56:35.980227   12457 cri.go:96] found id: "6b948bf13d65c047739bccae4ed739b9f7fa926df29d62f31737a93200f79453"
	I0110 01:56:35.980231   12457 cri.go:96] found id: "7eebd1b695c51c83b430b93742ee73b2ca9993855903b7e6f7ae9d85c8d2fc50"
	I0110 01:56:35.980235   12457 cri.go:96] found id: "d95a18b705959f30d3471889d63995aa4063cde1d6299c4b2f1c667a21a75efd"
	I0110 01:56:35.980238   12457 cri.go:96] found id: "a458547631bdc917e613986d8f7ac6fd7565359fe88471bbce824f1874e6ec32"
	I0110 01:56:35.980242   12457 cri.go:96] found id: "abd9f763f3604b1b1ed81f007194c72054c1a43886832027733e3a1212a8a3b9"
	I0110 01:56:35.980244   12457 cri.go:96] found id: "e27ae252f277861a588db1ed9829c8aa537b82b2413c3d13c83bffbb69308234"
	I0110 01:56:35.980247   12457 cri.go:96] found id: "a9693cc77cf0c484c4c27fa1cdee8533ce5f5d13d0d4db3bbb53a00565b6f118"
	I0110 01:56:35.980269   12457 cri.go:96] found id: "2c8af457d0f780608fcdfa1b8cb701ac07004f54e4c6aa0a8e3274b148b5e61b"
	I0110 01:56:35.980278   12457 cri.go:96] found id: "dbe5d7a996f35eb0c94f70151b1bf92153b3fec733f1ecc9a041cb88ee462f6a"
	I0110 01:56:35.980283   12457 cri.go:96] found id: "a23567b0b791f5de297c9aef90ff8140157bc0649af00ce0e5e30e85bd9ad5ab"
	I0110 01:56:35.980286   12457 cri.go:96] found id: "8bce067de04e9ddb264d6b93bf83726a56f1d31349205b41d7b0321df22fc8d2"
	I0110 01:56:35.980289   12457 cri.go:96] found id: "245c82556c09186d3976774f81d9cd646157daeb5a2ffd046ba607941618606e"
	I0110 01:56:35.980292   12457 cri.go:96] found id: "b6868c2a5d4ce10096791acf437ee992cfdbd960bd8a366942e1351e6d349ca0"
	I0110 01:56:35.980301   12457 cri.go:96] found id: "3e0c2863ef6c21cbabb9e19c9f8f04a86e18a362494488884d4f1773f7580030"
	I0110 01:56:35.980304   12457 cri.go:96] found id: "27448255d1e1c54f8723ee7cca1526b639c1aebd34bdb2d75396a1ac1817885e"
	I0110 01:56:35.980309   12457 cri.go:96] found id: "6f32539141278af9c24103ee0c0a5e69ce237dbbe3695b92b0c5bf3956b30002"
	I0110 01:56:35.980317   12457 cri.go:96] found id: "5a226148b0ac79fc9d466c6da91dac92efac9abb8d69f4c01f8ce6b764e6b7bc"
	I0110 01:56:35.980320   12457 cri.go:96] found id: "014b6f8800d7d2127c3f3e57c97f5ed5d5a2361c80253dc74bd9f42bf604e9c2"
	I0110 01:56:35.980324   12457 cri.go:96] found id: "c951c20416799b3f43a759d9a52dedde3ee480d19bc9e255bb3cc1c62a5a8d25"
	I0110 01:56:35.980327   12457 cri.go:96] found id: "1c96ddd0e76bb0b8b444708102feeda0babec3472e7b33767118eb64739b9136"
	I0110 01:56:35.980331   12457 cri.go:96] found id: "80a58bbf677ab4ba0d96a2e9411138aa345db6101e0e1af2e9ea8754ffd39109"
	I0110 01:56:35.980342   12457 cri.go:96] found id: ""
	I0110 01:56:35.980394   12457 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:56:35.996762   12457 out.go:203] 
	W0110 01:56:35.999656   12457 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:56:35.999691   12457 out.go:285] * 
	* 
	W0110 01:56:36.001364   12457 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:56:36.004285   12457 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-106930 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.38s)
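This Headlamp failure and the addon-disable failures that follow (CloudSpanner, LocalPath, NvidiaDevicePlugin, Yakd) all exit 11 for the same reason visible in the stderr above: minikube's paused check lists kube-system containers with crictl and then runs "sudo runc list -f json" on the node, which exits 1 because /run/runc does not exist on this crio configuration. A hedged reproduction sketch against the same profile (commands assumed, not part of the recorded run):
	# List kube-system containers the way the paused check does (this step succeeds in the logs above).
	out/minikube-linux-arm64 -p addons-106930 ssh "sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# The follow-up runc query is the step that fails with exit status 1 in every disable attempt.
	out/minikube-linux-arm64 -p addons-106930 ssh "sudo runc list -f json"
	# Check which runtime state directory actually exists on the node (crun vs runc here is an assumption to verify, not something the logs state).
	out/minikube-linux-arm64 -p addons-106930 ssh "ls -d /run/runc /run/crun 2>&1 || true"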

                                                
                                    
TestAddons/parallel/CloudSpanner (5.35s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-zbt8q" [2a57c103-6fca-44b7-aaf5-c19e1be71325] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002784479s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-106930 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-106930 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (336.472616ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:56:32.369491   11832 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:56:32.369748   11832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:32.369757   11832 out.go:374] Setting ErrFile to fd 2...
	I0110 01:56:32.369763   11832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:32.370037   11832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 01:56:32.370289   11832 mustload.go:66] Loading cluster: addons-106930
	I0110 01:56:32.370667   11832 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:32.370682   11832 addons.go:622] checking whether the cluster is paused
	I0110 01:56:32.370790   11832 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:32.370799   11832 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:56:32.371288   11832 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:56:32.408024   11832 ssh_runner.go:195] Run: systemctl --version
	I0110 01:56:32.408096   11832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:56:32.426383   11832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:56:32.539311   11832 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:56:32.539397   11832 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:56:32.603220   11832 cri.go:96] found id: "5dafbb5913003c3cb4147e90f1dde30311fabfa04e53c2fe9e8b29032265715e"
	I0110 01:56:32.603239   11832 cri.go:96] found id: "54899c00ceb5888bd4e38adcc09f61d6f4613e853ecdd6b32e4c5bb99d581e72"
	I0110 01:56:32.603244   11832 cri.go:96] found id: "6b948bf13d65c047739bccae4ed739b9f7fa926df29d62f31737a93200f79453"
	I0110 01:56:32.603248   11832 cri.go:96] found id: "7eebd1b695c51c83b430b93742ee73b2ca9993855903b7e6f7ae9d85c8d2fc50"
	I0110 01:56:32.603251   11832 cri.go:96] found id: "d95a18b705959f30d3471889d63995aa4063cde1d6299c4b2f1c667a21a75efd"
	I0110 01:56:32.603254   11832 cri.go:96] found id: "a458547631bdc917e613986d8f7ac6fd7565359fe88471bbce824f1874e6ec32"
	I0110 01:56:32.603258   11832 cri.go:96] found id: "abd9f763f3604b1b1ed81f007194c72054c1a43886832027733e3a1212a8a3b9"
	I0110 01:56:32.603260   11832 cri.go:96] found id: "e27ae252f277861a588db1ed9829c8aa537b82b2413c3d13c83bffbb69308234"
	I0110 01:56:32.603264   11832 cri.go:96] found id: "a9693cc77cf0c484c4c27fa1cdee8533ce5f5d13d0d4db3bbb53a00565b6f118"
	I0110 01:56:32.603271   11832 cri.go:96] found id: "2c8af457d0f780608fcdfa1b8cb701ac07004f54e4c6aa0a8e3274b148b5e61b"
	I0110 01:56:32.603274   11832 cri.go:96] found id: "dbe5d7a996f35eb0c94f70151b1bf92153b3fec733f1ecc9a041cb88ee462f6a"
	I0110 01:56:32.603277   11832 cri.go:96] found id: "a23567b0b791f5de297c9aef90ff8140157bc0649af00ce0e5e30e85bd9ad5ab"
	I0110 01:56:32.603280   11832 cri.go:96] found id: "8bce067de04e9ddb264d6b93bf83726a56f1d31349205b41d7b0321df22fc8d2"
	I0110 01:56:32.603283   11832 cri.go:96] found id: "245c82556c09186d3976774f81d9cd646157daeb5a2ffd046ba607941618606e"
	I0110 01:56:32.603286   11832 cri.go:96] found id: "b6868c2a5d4ce10096791acf437ee992cfdbd960bd8a366942e1351e6d349ca0"
	I0110 01:56:32.603293   11832 cri.go:96] found id: "3e0c2863ef6c21cbabb9e19c9f8f04a86e18a362494488884d4f1773f7580030"
	I0110 01:56:32.603296   11832 cri.go:96] found id: "27448255d1e1c54f8723ee7cca1526b639c1aebd34bdb2d75396a1ac1817885e"
	I0110 01:56:32.603301   11832 cri.go:96] found id: "6f32539141278af9c24103ee0c0a5e69ce237dbbe3695b92b0c5bf3956b30002"
	I0110 01:56:32.603304   11832 cri.go:96] found id: "5a226148b0ac79fc9d466c6da91dac92efac9abb8d69f4c01f8ce6b764e6b7bc"
	I0110 01:56:32.603307   11832 cri.go:96] found id: "014b6f8800d7d2127c3f3e57c97f5ed5d5a2361c80253dc74bd9f42bf604e9c2"
	I0110 01:56:32.603311   11832 cri.go:96] found id: "c951c20416799b3f43a759d9a52dedde3ee480d19bc9e255bb3cc1c62a5a8d25"
	I0110 01:56:32.603314   11832 cri.go:96] found id: "1c96ddd0e76bb0b8b444708102feeda0babec3472e7b33767118eb64739b9136"
	I0110 01:56:32.603317   11832 cri.go:96] found id: "80a58bbf677ab4ba0d96a2e9411138aa345db6101e0e1af2e9ea8754ffd39109"
	I0110 01:56:32.603320   11832 cri.go:96] found id: ""
	I0110 01:56:32.603367   11832 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:56:32.627650   11832 out.go:203] 
	W0110 01:56:32.630751   11832 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:56:32.630779   11832 out.go:285] * 
	* 
	W0110 01:56:32.632438   11832 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:56:32.635788   11832 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-106930 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.35s)

                                                
                                    
TestAddons/parallel/LocalPath (9.41s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-106930 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-106930 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-106930 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-106930 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-106930 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-106930 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-106930 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-106930 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [bfe1e91c-7423-46fc-823b-addfb55e70a9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [bfe1e91c-7423-46fc-823b-addfb55e70a9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [bfe1e91c-7423-46fc-823b-addfb55e70a9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003024572s
addons_test.go:969: (dbg) Run:  kubectl --context addons-106930 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-106930 ssh "cat /opt/local-path-provisioner/pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-106930 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-106930 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-106930 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-106930 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (309.479071ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:56:32.379759   11837 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:56:32.379983   11837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:32.379997   11837 out.go:374] Setting ErrFile to fd 2...
	I0110 01:56:32.380004   11837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:32.380361   11837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 01:56:32.380757   11837 mustload.go:66] Loading cluster: addons-106930
	I0110 01:56:32.381895   11837 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:32.381958   11837 addons.go:622] checking whether the cluster is paused
	I0110 01:56:32.382134   11837 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:32.382173   11837 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:56:32.382890   11837 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:56:32.410783   11837 ssh_runner.go:195] Run: systemctl --version
	I0110 01:56:32.410845   11837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:56:32.433093   11837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:56:32.543657   11837 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:56:32.543741   11837 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:56:32.588278   11837 cri.go:96] found id: "5dafbb5913003c3cb4147e90f1dde30311fabfa04e53c2fe9e8b29032265715e"
	I0110 01:56:32.588296   11837 cri.go:96] found id: "54899c00ceb5888bd4e38adcc09f61d6f4613e853ecdd6b32e4c5bb99d581e72"
	I0110 01:56:32.588301   11837 cri.go:96] found id: "6b948bf13d65c047739bccae4ed739b9f7fa926df29d62f31737a93200f79453"
	I0110 01:56:32.588305   11837 cri.go:96] found id: "7eebd1b695c51c83b430b93742ee73b2ca9993855903b7e6f7ae9d85c8d2fc50"
	I0110 01:56:32.588308   11837 cri.go:96] found id: "d95a18b705959f30d3471889d63995aa4063cde1d6299c4b2f1c667a21a75efd"
	I0110 01:56:32.588312   11837 cri.go:96] found id: "a458547631bdc917e613986d8f7ac6fd7565359fe88471bbce824f1874e6ec32"
	I0110 01:56:32.588316   11837 cri.go:96] found id: "abd9f763f3604b1b1ed81f007194c72054c1a43886832027733e3a1212a8a3b9"
	I0110 01:56:32.588320   11837 cri.go:96] found id: "e27ae252f277861a588db1ed9829c8aa537b82b2413c3d13c83bffbb69308234"
	I0110 01:56:32.588323   11837 cri.go:96] found id: "a9693cc77cf0c484c4c27fa1cdee8533ce5f5d13d0d4db3bbb53a00565b6f118"
	I0110 01:56:32.588331   11837 cri.go:96] found id: "2c8af457d0f780608fcdfa1b8cb701ac07004f54e4c6aa0a8e3274b148b5e61b"
	I0110 01:56:32.588334   11837 cri.go:96] found id: "dbe5d7a996f35eb0c94f70151b1bf92153b3fec733f1ecc9a041cb88ee462f6a"
	I0110 01:56:32.588337   11837 cri.go:96] found id: "a23567b0b791f5de297c9aef90ff8140157bc0649af00ce0e5e30e85bd9ad5ab"
	I0110 01:56:32.588342   11837 cri.go:96] found id: "8bce067de04e9ddb264d6b93bf83726a56f1d31349205b41d7b0321df22fc8d2"
	I0110 01:56:32.588346   11837 cri.go:96] found id: "245c82556c09186d3976774f81d9cd646157daeb5a2ffd046ba607941618606e"
	I0110 01:56:32.588349   11837 cri.go:96] found id: "b6868c2a5d4ce10096791acf437ee992cfdbd960bd8a366942e1351e6d349ca0"
	I0110 01:56:32.588353   11837 cri.go:96] found id: "3e0c2863ef6c21cbabb9e19c9f8f04a86e18a362494488884d4f1773f7580030"
	I0110 01:56:32.588357   11837 cri.go:96] found id: "27448255d1e1c54f8723ee7cca1526b639c1aebd34bdb2d75396a1ac1817885e"
	I0110 01:56:32.588361   11837 cri.go:96] found id: "6f32539141278af9c24103ee0c0a5e69ce237dbbe3695b92b0c5bf3956b30002"
	I0110 01:56:32.588364   11837 cri.go:96] found id: "5a226148b0ac79fc9d466c6da91dac92efac9abb8d69f4c01f8ce6b764e6b7bc"
	I0110 01:56:32.588368   11837 cri.go:96] found id: "014b6f8800d7d2127c3f3e57c97f5ed5d5a2361c80253dc74bd9f42bf604e9c2"
	I0110 01:56:32.588373   11837 cri.go:96] found id: "c951c20416799b3f43a759d9a52dedde3ee480d19bc9e255bb3cc1c62a5a8d25"
	I0110 01:56:32.588376   11837 cri.go:96] found id: "1c96ddd0e76bb0b8b444708102feeda0babec3472e7b33767118eb64739b9136"
	I0110 01:56:32.588379   11837 cri.go:96] found id: "80a58bbf677ab4ba0d96a2e9411138aa345db6101e0e1af2e9ea8754ffd39109"
	I0110 01:56:32.588382   11837 cri.go:96] found id: ""
	I0110 01:56:32.588430   11837 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:56:32.620311   11837 out.go:203] 
	W0110 01:56:32.623663   11837 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:56:32.623684   11837 out.go:285] * 
	* 
	W0110 01:56:32.625367   11837 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:56:32.627762   11837 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-106930 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.41s)
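For LocalPath the volume round-trip itself passed (file1 was readable over ssh and the delete helper pod cleaned up the volume, per the kubelet log above); only the final addon disable failed, for the runc reason noted earlier. A hedged follow-up check on the host path used by the test (command assumed, not part of the recorded run):
	# Confirm the delete helper removed the provisioned directory for pvc-dd421472-0250-4d56-a94b-a5aa6c045a0e.
	out/minikube-linux-arm64 -p addons-106930 ssh "ls /opt/local-path-provisioner/ || true"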

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-thwrk" [16277925-8e1a-4395-ae02-badbd996a408] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003323128s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-106930 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-106930 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (250.714008ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:56:23.020562   11376 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:56:23.020780   11376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:23.020792   11376 out.go:374] Setting ErrFile to fd 2...
	I0110 01:56:23.020798   11376 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:23.021084   11376 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 01:56:23.021398   11376 mustload.go:66] Loading cluster: addons-106930
	I0110 01:56:23.021807   11376 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:23.021833   11376 addons.go:622] checking whether the cluster is paused
	I0110 01:56:23.021976   11376 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:23.021996   11376 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:56:23.022566   11376 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:56:23.040331   11376 ssh_runner.go:195] Run: systemctl --version
	I0110 01:56:23.040394   11376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:56:23.062881   11376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:56:23.171076   11376 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:56:23.171221   11376 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:56:23.198542   11376 cri.go:96] found id: "5dafbb5913003c3cb4147e90f1dde30311fabfa04e53c2fe9e8b29032265715e"
	I0110 01:56:23.198561   11376 cri.go:96] found id: "54899c00ceb5888bd4e38adcc09f61d6f4613e853ecdd6b32e4c5bb99d581e72"
	I0110 01:56:23.198568   11376 cri.go:96] found id: "6b948bf13d65c047739bccae4ed739b9f7fa926df29d62f31737a93200f79453"
	I0110 01:56:23.198572   11376 cri.go:96] found id: "7eebd1b695c51c83b430b93742ee73b2ca9993855903b7e6f7ae9d85c8d2fc50"
	I0110 01:56:23.198575   11376 cri.go:96] found id: "d95a18b705959f30d3471889d63995aa4063cde1d6299c4b2f1c667a21a75efd"
	I0110 01:56:23.198579   11376 cri.go:96] found id: "a458547631bdc917e613986d8f7ac6fd7565359fe88471bbce824f1874e6ec32"
	I0110 01:56:23.198582   11376 cri.go:96] found id: "abd9f763f3604b1b1ed81f007194c72054c1a43886832027733e3a1212a8a3b9"
	I0110 01:56:23.198585   11376 cri.go:96] found id: "e27ae252f277861a588db1ed9829c8aa537b82b2413c3d13c83bffbb69308234"
	I0110 01:56:23.198588   11376 cri.go:96] found id: "a9693cc77cf0c484c4c27fa1cdee8533ce5f5d13d0d4db3bbb53a00565b6f118"
	I0110 01:56:23.198596   11376 cri.go:96] found id: "2c8af457d0f780608fcdfa1b8cb701ac07004f54e4c6aa0a8e3274b148b5e61b"
	I0110 01:56:23.198599   11376 cri.go:96] found id: "dbe5d7a996f35eb0c94f70151b1bf92153b3fec733f1ecc9a041cb88ee462f6a"
	I0110 01:56:23.198602   11376 cri.go:96] found id: "a23567b0b791f5de297c9aef90ff8140157bc0649af00ce0e5e30e85bd9ad5ab"
	I0110 01:56:23.198606   11376 cri.go:96] found id: "8bce067de04e9ddb264d6b93bf83726a56f1d31349205b41d7b0321df22fc8d2"
	I0110 01:56:23.198609   11376 cri.go:96] found id: "245c82556c09186d3976774f81d9cd646157daeb5a2ffd046ba607941618606e"
	I0110 01:56:23.198612   11376 cri.go:96] found id: "b6868c2a5d4ce10096791acf437ee992cfdbd960bd8a366942e1351e6d349ca0"
	I0110 01:56:23.198621   11376 cri.go:96] found id: "3e0c2863ef6c21cbabb9e19c9f8f04a86e18a362494488884d4f1773f7580030"
	I0110 01:56:23.198624   11376 cri.go:96] found id: "27448255d1e1c54f8723ee7cca1526b639c1aebd34bdb2d75396a1ac1817885e"
	I0110 01:56:23.198631   11376 cri.go:96] found id: "6f32539141278af9c24103ee0c0a5e69ce237dbbe3695b92b0c5bf3956b30002"
	I0110 01:56:23.198634   11376 cri.go:96] found id: "5a226148b0ac79fc9d466c6da91dac92efac9abb8d69f4c01f8ce6b764e6b7bc"
	I0110 01:56:23.198637   11376 cri.go:96] found id: "014b6f8800d7d2127c3f3e57c97f5ed5d5a2361c80253dc74bd9f42bf604e9c2"
	I0110 01:56:23.198643   11376 cri.go:96] found id: "c951c20416799b3f43a759d9a52dedde3ee480d19bc9e255bb3cc1c62a5a8d25"
	I0110 01:56:23.198646   11376 cri.go:96] found id: "1c96ddd0e76bb0b8b444708102feeda0babec3472e7b33767118eb64739b9136"
	I0110 01:56:23.198649   11376 cri.go:96] found id: "80a58bbf677ab4ba0d96a2e9411138aa345db6101e0e1af2e9ea8754ffd39109"
	I0110 01:56:23.198652   11376 cri.go:96] found id: ""
	I0110 01:56:23.198699   11376 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:56:23.212856   11376 out.go:203] 
	W0110 01:56:23.215874   11376 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:56:23.215896   11376 out.go:285] * 
	* 
	W0110 01:56:23.217547   11376 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:56:23.220413   11376 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-106930 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.26s)

                                                
                                    
TestAddons/parallel/Yakd (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-tp82j" [4531fecc-ddc0-4e02-baee-ab954b59e840] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003531836s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-106930 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-106930 addons disable yakd --alsologtostderr -v=1: exit status 11 (253.807585ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 01:56:16.768393   11279 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:56:16.768788   11279 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:16.768818   11279 out.go:374] Setting ErrFile to fd 2...
	I0110 01:56:16.768847   11279 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:56:16.769251   11279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 01:56:16.769640   11279 mustload.go:66] Loading cluster: addons-106930
	I0110 01:56:16.770382   11279 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:16.770429   11279 addons.go:622] checking whether the cluster is paused
	I0110 01:56:16.770614   11279 config.go:182] Loaded profile config "addons-106930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 01:56:16.770648   11279 host.go:66] Checking if "addons-106930" exists ...
	I0110 01:56:16.771481   11279 cli_runner.go:164] Run: docker container inspect addons-106930 --format={{.State.Status}}
	I0110 01:56:16.790897   11279 ssh_runner.go:195] Run: systemctl --version
	I0110 01:56:16.790956   11279 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-106930
	I0110 01:56:16.807988   11279 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/addons-106930/id_rsa Username:docker}
	I0110 01:56:16.914044   11279 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 01:56:16.914121   11279 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 01:56:16.942434   11279 cri.go:96] found id: "5dafbb5913003c3cb4147e90f1dde30311fabfa04e53c2fe9e8b29032265715e"
	I0110 01:56:16.942511   11279 cri.go:96] found id: "54899c00ceb5888bd4e38adcc09f61d6f4613e853ecdd6b32e4c5bb99d581e72"
	I0110 01:56:16.942532   11279 cri.go:96] found id: "6b948bf13d65c047739bccae4ed739b9f7fa926df29d62f31737a93200f79453"
	I0110 01:56:16.942550   11279 cri.go:96] found id: "7eebd1b695c51c83b430b93742ee73b2ca9993855903b7e6f7ae9d85c8d2fc50"
	I0110 01:56:16.942585   11279 cri.go:96] found id: "d95a18b705959f30d3471889d63995aa4063cde1d6299c4b2f1c667a21a75efd"
	I0110 01:56:16.942607   11279 cri.go:96] found id: "a458547631bdc917e613986d8f7ac6fd7565359fe88471bbce824f1874e6ec32"
	I0110 01:56:16.942626   11279 cri.go:96] found id: "abd9f763f3604b1b1ed81f007194c72054c1a43886832027733e3a1212a8a3b9"
	I0110 01:56:16.942651   11279 cri.go:96] found id: "e27ae252f277861a588db1ed9829c8aa537b82b2413c3d13c83bffbb69308234"
	I0110 01:56:16.942687   11279 cri.go:96] found id: "a9693cc77cf0c484c4c27fa1cdee8533ce5f5d13d0d4db3bbb53a00565b6f118"
	I0110 01:56:16.942708   11279 cri.go:96] found id: "2c8af457d0f780608fcdfa1b8cb701ac07004f54e4c6aa0a8e3274b148b5e61b"
	I0110 01:56:16.942726   11279 cri.go:96] found id: "dbe5d7a996f35eb0c94f70151b1bf92153b3fec733f1ecc9a041cb88ee462f6a"
	I0110 01:56:16.942759   11279 cri.go:96] found id: "a23567b0b791f5de297c9aef90ff8140157bc0649af00ce0e5e30e85bd9ad5ab"
	I0110 01:56:16.942781   11279 cri.go:96] found id: "8bce067de04e9ddb264d6b93bf83726a56f1d31349205b41d7b0321df22fc8d2"
	I0110 01:56:16.942799   11279 cri.go:96] found id: "245c82556c09186d3976774f81d9cd646157daeb5a2ffd046ba607941618606e"
	I0110 01:56:16.942820   11279 cri.go:96] found id: "b6868c2a5d4ce10096791acf437ee992cfdbd960bd8a366942e1351e6d349ca0"
	I0110 01:56:16.942858   11279 cri.go:96] found id: "3e0c2863ef6c21cbabb9e19c9f8f04a86e18a362494488884d4f1773f7580030"
	I0110 01:56:16.942880   11279 cri.go:96] found id: "27448255d1e1c54f8723ee7cca1526b639c1aebd34bdb2d75396a1ac1817885e"
	I0110 01:56:16.942900   11279 cri.go:96] found id: "6f32539141278af9c24103ee0c0a5e69ce237dbbe3695b92b0c5bf3956b30002"
	I0110 01:56:16.942936   11279 cri.go:96] found id: "5a226148b0ac79fc9d466c6da91dac92efac9abb8d69f4c01f8ce6b764e6b7bc"
	I0110 01:56:16.942956   11279 cri.go:96] found id: "014b6f8800d7d2127c3f3e57c97f5ed5d5a2361c80253dc74bd9f42bf604e9c2"
	I0110 01:56:16.942975   11279 cri.go:96] found id: "c951c20416799b3f43a759d9a52dedde3ee480d19bc9e255bb3cc1c62a5a8d25"
	I0110 01:56:16.942993   11279 cri.go:96] found id: "1c96ddd0e76bb0b8b444708102feeda0babec3472e7b33767118eb64739b9136"
	I0110 01:56:16.943025   11279 cri.go:96] found id: "80a58bbf677ab4ba0d96a2e9411138aa345db6101e0e1af2e9ea8754ffd39109"
	I0110 01:56:16.943047   11279 cri.go:96] found id: ""
	I0110 01:56:16.943131   11279 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 01:56:16.958215   11279 out.go:203] 
	W0110 01:56:16.960999   11279 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:56:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 01:56:16.961023   11279 out.go:285] * 
	* 
	W0110 01:56:16.962712   11279 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 01:56:16.965560   11279 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-106930 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.26s)
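
The Yakd disable fails on the same paused-state check as above. For context, a small illustrative sketch (not part of the test) of inspecting the runtime configuration cri-o reports on the node; "crio config" is the same command the provisioning log runs later in this report, and the grep patterns are only assumptions chosen for readability.

	# Show the runtime section of cri-o's effective configuration.
	out/minikube-linux-arm64 -p addons-106930 ssh "sudo crio config" | grep -iA2 "\[crio.runtime"
	# crictl info reports the runtime the CRI layer is talking to.
	out/minikube-linux-arm64 -p addons-106930 ssh "sudo crictl info" | grep -i runtime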

                                                
                                    
TestForceSystemdFlag (507.4s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-038359 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-038359 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 109 (8m22.645663446s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-038359] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22414
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-038359" primary control-plane node in "force-systemd-flag-038359" cluster
	* Pulling base image v0.0.48-1767944074-22401 ...
	* Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:40:34.944502  190834 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:40:34.944636  190834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:40:34.944646  190834 out.go:374] Setting ErrFile to fd 2...
	I0110 02:40:34.944652  190834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:40:34.944905  190834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:40:34.945319  190834 out.go:368] Setting JSON to false
	I0110 02:40:34.946128  190834 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4984,"bootTime":1768007851,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 02:40:34.946196  190834 start.go:143] virtualization:  
	I0110 02:40:34.949943  190834 out.go:179] * [force-systemd-flag-038359] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:40:34.954405  190834 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:40:34.954550  190834 notify.go:221] Checking for updates...
	I0110 02:40:34.962073  190834 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:40:34.965352  190834 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:40:34.968502  190834 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 02:40:34.971655  190834 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:40:34.975049  190834 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:40:34.978712  190834 config.go:182] Loaded profile config "force-systemd-env-088457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:40:34.978864  190834 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:40:35.005244  190834 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:40:35.005379  190834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:40:35.067761  190834 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:40:35.05878014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:40:35.067930  190834 docker.go:319] overlay module found
	I0110 02:40:35.072665  190834 out.go:179] * Using the docker driver based on user configuration
	I0110 02:40:35.075618  190834 start.go:309] selected driver: docker
	I0110 02:40:35.075634  190834 start.go:928] validating driver "docker" against <nil>
	I0110 02:40:35.075648  190834 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:40:35.076468  190834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:40:35.135115  190834 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:40:35.125911054 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:40:35.135274  190834 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 02:40:35.135531  190834 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 02:40:35.138618  190834 out.go:179] * Using Docker driver with root privileges
	I0110 02:40:35.141654  190834 cni.go:84] Creating CNI manager for ""
	I0110 02:40:35.141718  190834 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:40:35.141732  190834 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 02:40:35.141821  190834 start.go:353] cluster config:
	{Name:force-systemd-flag-038359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-038359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:40:35.144904  190834 out.go:179] * Starting "force-systemd-flag-038359" primary control-plane node in "force-systemd-flag-038359" cluster
	I0110 02:40:35.147789  190834 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:40:35.150933  190834 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:40:35.153896  190834 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:40:35.153864  190834 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:40:35.153995  190834 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 02:40:35.154005  190834 cache.go:65] Caching tarball of preloaded images
	I0110 02:40:35.154091  190834 preload.go:251] Found /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 02:40:35.154101  190834 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:40:35.154216  190834 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/config.json ...
	I0110 02:40:35.154235  190834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/config.json: {Name:mkd7f432d87646b77f41ac9d01b0d3f1947185db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:40:35.202872  190834 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:40:35.202900  190834 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:40:35.202915  190834 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:40:35.202946  190834 start.go:360] acquireMachinesLock for force-systemd-flag-038359: {Name:mk2df15322c6a2e3c70c612564bce9d9870c5bba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:40:35.203079  190834 start.go:364] duration metric: took 118.109µs to acquireMachinesLock for "force-systemd-flag-038359"
	I0110 02:40:35.203107  190834 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-038359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-038359 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:40:35.203183  190834 start.go:125] createHost starting for "" (driver="docker")
	I0110 02:40:35.206679  190834 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:40:35.206980  190834 start.go:159] libmachine.API.Create for "force-systemd-flag-038359" (driver="docker")
	I0110 02:40:35.207032  190834 client.go:173] LocalClient.Create starting
	I0110 02:40:35.207176  190834 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem
	I0110 02:40:35.207273  190834 main.go:144] libmachine: Decoding PEM data...
	I0110 02:40:35.207310  190834 main.go:144] libmachine: Parsing certificate...
	I0110 02:40:35.207441  190834 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem
	I0110 02:40:35.207506  190834 main.go:144] libmachine: Decoding PEM data...
	I0110 02:40:35.207544  190834 main.go:144] libmachine: Parsing certificate...
	I0110 02:40:35.208108  190834 cli_runner.go:164] Run: docker network inspect force-systemd-flag-038359 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:40:35.235486  190834 cli_runner.go:211] docker network inspect force-systemd-flag-038359 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:40:35.235584  190834 network_create.go:284] running [docker network inspect force-systemd-flag-038359] to gather additional debugging logs...
	I0110 02:40:35.235606  190834 cli_runner.go:164] Run: docker network inspect force-systemd-flag-038359
	W0110 02:40:35.260377  190834 cli_runner.go:211] docker network inspect force-systemd-flag-038359 returned with exit code 1
	I0110 02:40:35.260413  190834 network_create.go:287] error running [docker network inspect force-systemd-flag-038359]: docker network inspect force-systemd-flag-038359: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-038359 not found
	I0110 02:40:35.260428  190834 network_create.go:289] output of [docker network inspect force-systemd-flag-038359]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-038359 not found
	
	** /stderr **
	I0110 02:40:35.260516  190834 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:40:35.276949  190834 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dba69832168e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:28:cf:b8:5c:6c} reservation:<nil>}
	I0110 02:40:35.277231  190834 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1bcca9209747 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d2:8b:83:a4:43:ae} reservation:<nil>}
	I0110 02:40:35.277534  190834 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-383ddfacc8f8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:26:ff:fb:97:66:af} reservation:<nil>}
	I0110 02:40:35.277830  190834 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d16f11dcaaec IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:86:ad:01:73:d1:58} reservation:<nil>}
	I0110 02:40:35.278234  190834 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a40ee0}
	I0110 02:40:35.278262  190834 network_create.go:124] attempt to create docker network force-systemd-flag-038359 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0110 02:40:35.278320  190834 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-038359 force-systemd-flag-038359
	I0110 02:40:35.340331  190834 network_create.go:108] docker network force-systemd-flag-038359 192.168.85.0/24 created
	I0110 02:40:35.340363  190834 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-038359" container
	I0110 02:40:35.340453  190834 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:40:35.354829  190834 cli_runner.go:164] Run: docker volume create force-systemd-flag-038359 --label name.minikube.sigs.k8s.io=force-systemd-flag-038359 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:40:35.371976  190834 oci.go:103] Successfully created a docker volume force-systemd-flag-038359
	I0110 02:40:35.372067  190834 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-038359-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-038359 --entrypoint /usr/bin/test -v force-systemd-flag-038359:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 02:40:35.906617  190834 oci.go:107] Successfully prepared a docker volume force-systemd-flag-038359
	I0110 02:40:35.906668  190834 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:40:35.906678  190834 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 02:40:35.906761  190834 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-038359:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 02:40:39.821199  190834 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-038359:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.914403019s)
	I0110 02:40:39.821230  190834 kic.go:203] duration metric: took 3.914548239s to extract preloaded images to volume ...
	W0110 02:40:39.821363  190834 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 02:40:39.821472  190834 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 02:40:39.875879  190834 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-038359 --name force-systemd-flag-038359 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-038359 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-038359 --network force-systemd-flag-038359 --ip 192.168.85.2 --volume force-systemd-flag-038359:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 02:40:40.238773  190834 cli_runner.go:164] Run: docker container inspect force-systemd-flag-038359 --format={{.State.Running}}
	I0110 02:40:40.262549  190834 cli_runner.go:164] Run: docker container inspect force-systemd-flag-038359 --format={{.State.Status}}
	I0110 02:40:40.285430  190834 cli_runner.go:164] Run: docker exec force-systemd-flag-038359 stat /var/lib/dpkg/alternatives/iptables
	I0110 02:40:40.334155  190834 oci.go:144] the created container "force-systemd-flag-038359" has a running status.
	I0110 02:40:40.334424  190834 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/force-systemd-flag-038359/id_rsa...
	I0110 02:40:40.949209  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/force-systemd-flag-038359/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0110 02:40:40.949263  190834 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22414-2353/.minikube/machines/force-systemd-flag-038359/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 02:40:40.968573  190834 cli_runner.go:164] Run: docker container inspect force-systemd-flag-038359 --format={{.State.Status}}
	I0110 02:40:40.985291  190834 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 02:40:40.985316  190834 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-038359 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 02:40:41.026402  190834 cli_runner.go:164] Run: docker container inspect force-systemd-flag-038359 --format={{.State.Status}}
	I0110 02:40:41.042806  190834 machine.go:94] provisionDockerMachine start ...
	I0110 02:40:41.042892  190834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-038359
	I0110 02:40:41.060224  190834 main.go:144] libmachine: Using SSH client type: native
	I0110 02:40:41.060568  190834 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I0110 02:40:41.060585  190834 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:40:41.061206  190834 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55550->127.0.0.1:33038: read: connection reset by peer
	I0110 02:40:44.211650  190834 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-038359
	
	I0110 02:40:44.211672  190834 ubuntu.go:182] provisioning hostname "force-systemd-flag-038359"
	I0110 02:40:44.211736  190834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-038359
	I0110 02:40:44.234462  190834 main.go:144] libmachine: Using SSH client type: native
	I0110 02:40:44.234776  190834 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I0110 02:40:44.234787  190834 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-038359 && echo "force-systemd-flag-038359" | sudo tee /etc/hostname
	I0110 02:40:44.392717  190834 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-038359
	
	I0110 02:40:44.392834  190834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-038359
	I0110 02:40:44.410530  190834 main.go:144] libmachine: Using SSH client type: native
	I0110 02:40:44.410841  190834 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I0110 02:40:44.410864  190834 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-038359' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-038359/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-038359' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:40:44.556086  190834 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:40:44.556111  190834 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2353/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2353/.minikube}
	I0110 02:40:44.556140  190834 ubuntu.go:190] setting up certificates
	I0110 02:40:44.556151  190834 provision.go:84] configureAuth start
	I0110 02:40:44.556226  190834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-038359
	I0110 02:40:44.573941  190834 provision.go:143] copyHostCerts
	I0110 02:40:44.573990  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem
	I0110 02:40:44.574024  190834 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem, removing ...
	I0110 02:40:44.574037  190834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem
	I0110 02:40:44.574119  190834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem (1082 bytes)
	I0110 02:40:44.574238  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem
	I0110 02:40:44.574261  190834 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem, removing ...
	I0110 02:40:44.574269  190834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem
	I0110 02:40:44.574298  190834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem (1123 bytes)
	I0110 02:40:44.574350  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem
	I0110 02:40:44.574371  190834 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem, removing ...
	I0110 02:40:44.574375  190834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem
	I0110 02:40:44.574400  190834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem (1679 bytes)
	I0110 02:40:44.574462  190834 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-038359 san=[127.0.0.1 192.168.85.2 force-systemd-flag-038359 localhost minikube]
	I0110 02:40:44.814297  190834 provision.go:177] copyRemoteCerts
	I0110 02:40:44.814369  190834 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:40:44.814411  190834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-038359
	I0110 02:40:44.831134  190834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/force-systemd-flag-038359/id_rsa Username:docker}
	I0110 02:40:44.936929  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0110 02:40:44.937056  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:40:44.961717  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0110 02:40:44.961843  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0110 02:40:44.983543  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0110 02:40:44.983605  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:40:45.001940  190834 provision.go:87] duration metric: took 445.770853ms to configureAuth
	I0110 02:40:45.001967  190834 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:40:45.002190  190834 config.go:182] Loaded profile config "force-systemd-flag-038359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:40:45.002297  190834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-038359
	I0110 02:40:45.042837  190834 main.go:144] libmachine: Using SSH client type: native
	I0110 02:40:45.043160  190834 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I0110 02:40:45.043174  190834 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:40:45.403873  190834 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:40:45.403965  190834 machine.go:97] duration metric: took 4.361126495s to provisionDockerMachine
	I0110 02:40:45.403977  190834 client.go:176] duration metric: took 10.196894815s to LocalClient.Create
	I0110 02:40:45.403988  190834 start.go:167] duration metric: took 10.197009117s to libmachine.API.Create "force-systemd-flag-038359"
	I0110 02:40:45.403997  190834 start.go:293] postStartSetup for "force-systemd-flag-038359" (driver="docker")
	I0110 02:40:45.404021  190834 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:40:45.404280  190834 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:40:45.404488  190834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-038359
	I0110 02:40:45.423546  190834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/force-systemd-flag-038359/id_rsa Username:docker}
	I0110 02:40:45.532084  190834 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:40:45.535722  190834 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:40:45.535751  190834 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:40:45.535763  190834 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/addons for local assets ...
	I0110 02:40:45.535841  190834 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/files for local assets ...
	I0110 02:40:45.535932  190834 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> 41682.pem in /etc/ssl/certs
	I0110 02:40:45.535943  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> /etc/ssl/certs/41682.pem
	I0110 02:40:45.536042  190834 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:40:45.543403  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:40:45.561065  190834 start.go:296] duration metric: took 157.040409ms for postStartSetup
	I0110 02:40:45.561424  190834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-038359
	I0110 02:40:45.577914  190834 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/config.json ...
	I0110 02:40:45.578198  190834 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:40:45.578238  190834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-038359
	I0110 02:40:45.595012  190834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/force-systemd-flag-038359/id_rsa Username:docker}
	I0110 02:40:45.693191  190834 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:40:45.699578  190834 start.go:128] duration metric: took 10.496380194s to createHost
	I0110 02:40:45.699599  190834 start.go:83] releasing machines lock for "force-systemd-flag-038359", held for 10.496511685s
	I0110 02:40:45.699672  190834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-038359
	I0110 02:40:45.717700  190834 ssh_runner.go:195] Run: cat /version.json
	I0110 02:40:45.717750  190834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-038359
	I0110 02:40:45.717774  190834 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:40:45.717835  190834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-038359
	I0110 02:40:45.744589  190834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/force-systemd-flag-038359/id_rsa Username:docker}
	I0110 02:40:45.761323  190834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/force-systemd-flag-038359/id_rsa Username:docker}
	I0110 02:40:45.847376  190834 ssh_runner.go:195] Run: systemctl --version
	I0110 02:40:45.952627  190834 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:40:45.997307  190834 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:40:46.001599  190834 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:40:46.001671  190834 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:40:46.031377  190834 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 02:40:46.031452  190834 start.go:496] detecting cgroup driver to use...
	I0110 02:40:46.031480  190834 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 02:40:46.031575  190834 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:40:46.048742  190834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:40:46.062097  190834 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:40:46.062163  190834 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:40:46.080253  190834 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:40:46.099779  190834 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:40:46.213218  190834 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:40:46.332605  190834 docker.go:234] disabling docker service ...
	I0110 02:40:46.332678  190834 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:40:46.353254  190834 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:40:46.366472  190834 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:40:46.502538  190834 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:40:46.619599  190834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:40:46.632499  190834 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:40:46.645800  190834 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:40:46.645881  190834 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:40:46.654423  190834 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 02:40:46.654491  190834 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:40:46.664375  190834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:40:46.673006  190834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:40:46.681841  190834 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:40:46.689784  190834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:40:46.698245  190834 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:40:46.711547  190834 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:40:46.720303  190834 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:40:46.727637  190834 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:40:46.734885  190834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:40:46.845207  190834 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 02:40:47.002772  190834 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:40:47.002854  190834 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:40:47.006743  190834 start.go:574] Will wait 60s for crictl version
	I0110 02:40:47.006871  190834 ssh_runner.go:195] Run: which crictl
	I0110 02:40:47.011036  190834 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:40:47.034983  190834 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:40:47.035063  190834 ssh_runner.go:195] Run: crio --version
	I0110 02:40:47.062515  190834 ssh_runner.go:195] Run: crio --version
	I0110 02:40:47.095680  190834 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:40:47.098559  190834 cli_runner.go:164] Run: docker network inspect force-systemd-flag-038359 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:40:47.114186  190834 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 02:40:47.117755  190834 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:40:47.127040  190834 kubeadm.go:884] updating cluster {Name:force-systemd-flag-038359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-038359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:40:47.127150  190834 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:40:47.127212  190834 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:40:47.163981  190834 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:40:47.164005  190834 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:40:47.164059  190834 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:40:47.197690  190834 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:40:47.197710  190834 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:40:47.197717  190834 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0110 02:40:47.197821  190834 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-038359 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-038359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
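The ExecStart override above (note --cgroups-per-qos=false, --enforce-node-allocatable= and --hostname-override) is what gets copied into the kubelet drop-in a few lines below. A quick way to confirm what the node actually loads, assuming the paths minikube uses in this run:

	# Inspect the generated kubelet unit and its drop-in inside the profile's node.
	minikube -p force-systemd-flag-038359 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	minikube -p force-systemd-flag-038359 ssh -- sudo systemctl cat kubelet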
	I0110 02:40:47.197901  190834 ssh_runner.go:195] Run: crio config
	I0110 02:40:47.271211  190834 cni.go:84] Creating CNI manager for ""
	I0110 02:40:47.271234  190834 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:40:47.271250  190834 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:40:47.271271  190834 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-038359 NodeName:force-systemd-flag-038359 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:40:47.271414  190834 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-038359"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:40:47.271495  190834 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:40:47.279037  190834 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:40:47.279140  190834 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:40:47.286432  190834 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0110 02:40:47.299136  190834 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:40:47.311712  190834 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
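The rendered kubeadm config above (InitConfiguration, ClusterConfiguration, a KubeletConfiguration with cgroupDriver: systemd, and KubeProxyConfiguration) has just been copied to /var/tmp/minikube/kubeadm.yaml.new. It can be sanity-checked without touching cluster state via a dry run; a sketch using the binary path from this log:

	# Validate the generated config without starting the control plane.
	sudo env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run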
	I0110 02:40:47.324725  190834 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:40:47.328309  190834 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:40:47.337593  190834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:40:47.453947  190834 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:40:47.470677  190834 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359 for IP: 192.168.85.2
	I0110 02:40:47.470717  190834 certs.go:195] generating shared ca certs ...
	I0110 02:40:47.470734  190834 certs.go:227] acquiring lock for ca certs: {Name:mk60bb8578732e3e8efcf11c7d77fb48585828c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:40:47.470890  190834 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key
	I0110 02:40:47.470950  190834 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key
	I0110 02:40:47.470963  190834 certs.go:257] generating profile certs ...
	I0110 02:40:47.471029  190834 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/client.key
	I0110 02:40:47.471046  190834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/client.crt with IP's: []
	I0110 02:40:47.534793  190834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/client.crt ...
	I0110 02:40:47.534824  190834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/client.crt: {Name:mk9068837c6c8383975dad8341ce74c1b3c1e57c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:40:47.535017  190834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/client.key ...
	I0110 02:40:47.535032  190834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/client.key: {Name:mk783a300ec5c23d62425fe2d5bfd807023e0b6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:40:47.535126  190834 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.key.e7e177af
	I0110 02:40:47.535144  190834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.crt.e7e177af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0110 02:40:48.076389  190834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.crt.e7e177af ...
	I0110 02:40:48.076424  190834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.crt.e7e177af: {Name:mkeb6b0dd9ca6d2b1956a0b711fe2ee9db8bcbe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:40:48.076632  190834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.key.e7e177af ...
	I0110 02:40:48.076649  190834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.key.e7e177af: {Name:mk1a7b696dc969079486c70f177199e2a27ee94e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:40:48.076740  190834 certs.go:382] copying /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.crt.e7e177af -> /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.crt
	I0110 02:40:48.076822  190834 certs.go:386] copying /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.key.e7e177af -> /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.key
	I0110 02:40:48.076881  190834 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/proxy-client.key
	I0110 02:40:48.076899  190834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/proxy-client.crt with IP's: []
	I0110 02:40:48.446506  190834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/proxy-client.crt ...
	I0110 02:40:48.446539  190834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/proxy-client.crt: {Name:mk716e4378e7435584b6b60a78214a44f7210922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:40:48.446724  190834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/proxy-client.key ...
	I0110 02:40:48.446739  190834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/proxy-client.key: {Name:mkd1f41df2eb7d273f6066f14f3710f365fbea1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
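crypto.go signs the profile's apiserver certificate for the IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]; once the combined apiserver.crt has been written (02:40:48.076740 above), the SANs can be double-checked with openssl. A sketch, path taken from this log:

	# Print the Subject Alternative Names baked into the profile's apiserver certificate.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'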
	I0110 02:40:48.446820  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0110 02:40:48.446843  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0110 02:40:48.446860  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0110 02:40:48.446881  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0110 02:40:48.446902  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0110 02:40:48.446914  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0110 02:40:48.446929  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0110 02:40:48.446939  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0110 02:40:48.446989  190834 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem (1338 bytes)
	W0110 02:40:48.447035  190834 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168_empty.pem, impossibly tiny 0 bytes
	I0110 02:40:48.447048  190834 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:40:48.447076  190834 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:40:48.447103  190834 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:40:48.447132  190834 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem (1679 bytes)
	I0110 02:40:48.447177  190834 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:40:48.447211  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:40:48.447226  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem -> /usr/share/ca-certificates/4168.pem
	I0110 02:40:48.447239  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> /usr/share/ca-certificates/41682.pem
	I0110 02:40:48.447864  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:40:48.467051  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:40:48.484763  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:40:48.503563  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 02:40:48.520254  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0110 02:40:48.537881  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 02:40:48.555233  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:40:48.572166  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 02:40:48.589572  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:40:48.606589  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I0110 02:40:48.623146  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I0110 02:40:48.640116  190834 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:40:48.652506  190834 ssh_runner.go:195] Run: openssl version
	I0110 02:40:48.658604  190834 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:40:48.666481  190834 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:40:48.675186  190834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:40:48.679294  190834 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:40:48.679357  190834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:40:48.721246  190834 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:40:48.729550  190834 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 02:40:48.737253  190834 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I0110 02:40:48.744828  190834 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I0110 02:40:48.751643  190834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I0110 02:40:48.755102  190834 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:58 /usr/share/ca-certificates/4168.pem
	I0110 02:40:48.755203  190834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I0110 02:40:48.795628  190834 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:40:48.802737  190834 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4168.pem /etc/ssl/certs/51391683.0
	I0110 02:40:48.809639  190834 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I0110 02:40:48.816571  190834 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I0110 02:40:48.823951  190834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I0110 02:40:48.827510  190834 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:58 /usr/share/ca-certificates/41682.pem
	I0110 02:40:48.827569  190834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I0110 02:40:48.869863  190834 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:40:48.877210  190834 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41682.pem /etc/ssl/certs/3ec20f2e.0
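The openssl x509 -hash / ln -fs pairs above build the standard hashed-symlink layout in /etc/ssl/certs: each CA file is reachable as <subject-hash>.0 (b5213941.0 for minikubeCA.pem, 51391683.0 for 4168.pem, 3ec20f2e.0 for 41682.pem). Reproduced by hand for the minikube CA:

	# Compute the subject hash and create the matching symlink, as the log does for b5213941.0.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"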
	I0110 02:40:48.884193  190834 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:40:48.887853  190834 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 02:40:48.887906  190834 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-038359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-038359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:40:48.887977  190834 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:40:48.888044  190834 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:40:48.913860  190834 cri.go:96] found id: ""
	I0110 02:40:48.913981  190834 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:40:48.923939  190834 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 02:40:48.932369  190834 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:40:48.932466  190834 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:40:48.941319  190834 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:40:48.941339  190834 kubeadm.go:158] found existing configuration files:
	
	I0110 02:40:48.941416  190834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:40:48.949286  190834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:40:48.949375  190834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:40:48.957079  190834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:40:48.964885  190834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:40:48.964979  190834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:40:48.972629  190834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:40:48.981262  190834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:40:48.981359  190834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:40:48.988428  190834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:40:48.995921  190834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:40:48.995996  190834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:40:49.003106  190834 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:40:49.046349  190834 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:40:49.046622  190834 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:40:49.114730  190834 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:40:49.114907  190834 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:40:49.114977  190834 kubeadm.go:319] OS: Linux
	I0110 02:40:49.115060  190834 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:40:49.115141  190834 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:40:49.115221  190834 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:40:49.115301  190834 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:40:49.115381  190834 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:40:49.115462  190834 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:40:49.115537  190834 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:40:49.115617  190834 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:40:49.115697  190834 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:40:49.177733  190834 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:40:49.177909  190834 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:40:49.178056  190834 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:40:49.188195  190834 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 02:40:49.194618  190834 out.go:252]   - Generating certificates and keys ...
	I0110 02:40:49.194770  190834 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:40:49.194874  190834 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:40:49.875900  190834 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 02:40:49.923218  190834 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 02:40:50.061052  190834 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 02:40:50.410570  190834 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 02:40:50.526948  190834 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 02:40:50.527171  190834 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-038359 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 02:40:50.766081  190834 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 02:40:50.766334  190834 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-038359 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 02:40:51.190842  190834 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 02:40:51.513288  190834 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 02:40:51.788994  190834 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 02:40:51.789304  190834 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:40:51.886588  190834 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:40:53.601725  190834 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:40:54.134300  190834 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:40:54.202247  190834 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:40:54.436170  190834 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:40:54.436269  190834 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:40:54.436336  190834 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:40:54.443563  190834 out.go:252]   - Booting up control plane ...
	I0110 02:40:54.443705  190834 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:40:54.444096  190834 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:40:54.444168  190834 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:40:54.476232  190834 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:40:54.476345  190834 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:40:54.489373  190834 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:40:54.490448  190834 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:40:54.490858  190834 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:40:54.654854  190834 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:40:54.654975  190834 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:44:54.655845  190834 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001043276s
	I0110 02:44:54.655885  190834 kubeadm.go:319] 
	I0110 02:44:54.656000  190834 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 02:44:54.656061  190834 kubeadm.go:319] 	- The kubelet is not running
	I0110 02:44:54.656378  190834 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 02:44:54.656386  190834 kubeadm.go:319] 
	I0110 02:44:54.656597  190834 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 02:44:54.656866  190834 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 02:44:54.656937  190834 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 02:44:54.656946  190834 kubeadm.go:319] 
	I0110 02:44:54.661702  190834 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:44:54.662211  190834 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:44:54.662360  190834 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:44:54.662749  190834 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 02:44:54.662765  190834 kubeadm.go:319] 
	I0110 02:44:54.662915  190834 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W0110 02:44:54.663023  190834 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-038359 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-038359 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001043276s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-038359 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-038359 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001043276s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
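Both kubeadm init attempts in this test fail identically: the kubelet never serves http://127.0.0.1:10248/healthz within the 4m0s wait-control-plane budget, so kubeadm gives up. The commands it suggests are the natural first step; a sketch of running them against this profile's node (curl being present on the node is an assumption):

	# Check kubelet state and the local health endpoint inside the force-systemd-flag node.
	minikube -p force-systemd-flag-038359 ssh -- sudo systemctl status kubelet --no-pager
	minikube -p force-systemd-flag-038359 ssh -- sudo journalctl -xeu kubelet --no-pager -n 50
	minikube -p force-systemd-flag-038359 ssh -- curl -sS http://127.0.0.1:10248/healthz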
	
	I0110 02:44:54.663140  190834 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0110 02:44:55.075851  190834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:44:55.089502  190834 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:44:55.089569  190834 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:44:55.098023  190834 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:44:55.098042  190834 kubeadm.go:158] found existing configuration files:
	
	I0110 02:44:55.098097  190834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:44:55.106510  190834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:44:55.106629  190834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:44:55.114942  190834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:44:55.123295  190834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:44:55.123364  190834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:44:55.133325  190834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:44:55.141513  190834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:44:55.141578  190834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:44:55.150542  190834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:44:55.160255  190834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:44:55.160330  190834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:44:55.168655  190834 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:44:55.214853  190834 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:44:55.214919  190834 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:44:55.294435  190834 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:44:55.294511  190834 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:44:55.294551  190834 kubeadm.go:319] OS: Linux
	I0110 02:44:55.294601  190834 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:44:55.294652  190834 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:44:55.294703  190834 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:44:55.294755  190834 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:44:55.294805  190834 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:44:55.294860  190834 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:44:55.294912  190834 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:44:55.294963  190834 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:44:55.295013  190834 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:44:55.365460  190834 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:44:55.365574  190834 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:44:55.365671  190834 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:44:55.373470  190834 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 02:44:55.376963  190834 out.go:252]   - Generating certificates and keys ...
	I0110 02:44:55.377057  190834 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:44:55.377127  190834 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:44:55.377207  190834 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0110 02:44:55.377272  190834 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0110 02:44:55.377346  190834 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0110 02:44:55.377582  190834 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0110 02:44:55.377663  190834 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0110 02:44:55.377995  190834 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0110 02:44:55.378428  190834 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0110 02:44:55.379000  190834 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0110 02:44:55.379288  190834 kubeadm.go:319] [certs] Using the existing "sa" key
	I0110 02:44:55.379356  190834 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:44:55.765220  190834 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:44:55.912388  190834 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:44:56.016742  190834 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:44:56.132433  190834 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:44:56.887057  190834 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:44:56.887604  190834 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:44:56.890068  190834 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:44:56.893462  190834 out.go:252]   - Booting up control plane ...
	I0110 02:44:56.893565  190834 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:44:56.893646  190834 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:44:56.893713  190834 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:44:56.909248  190834 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:44:56.909364  190834 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:44:56.917125  190834 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:44:56.917453  190834 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:44:56.917499  190834 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:44:57.059884  190834 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:44:57.060002  190834 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:48:57.060863  190834 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001123817s
	I0110 02:48:57.065379  190834 kubeadm.go:319] 
	I0110 02:48:57.065521  190834 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 02:48:57.065604  190834 kubeadm.go:319] 	- The kubelet is not running
	I0110 02:48:57.065812  190834 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 02:48:57.065826  190834 kubeadm.go:319] 
	I0110 02:48:57.066022  190834 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 02:48:57.066086  190834 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 02:48:57.066161  190834 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 02:48:57.066172  190834 kubeadm.go:319] 
	I0110 02:48:57.067236  190834 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:48:57.068005  190834 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:48:57.068206  190834 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:48:57.068622  190834 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 02:48:57.068635  190834 kubeadm.go:319] 
	I0110 02:48:57.068751  190834 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 02:48:57.068816  190834 kubeadm.go:403] duration metric: took 8m8.180913411s to StartCluster
	I0110 02:48:57.068867  190834 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0110 02:48:57.068936  190834 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0110 02:48:57.098192  190834 cri.go:96] found id: ""
	I0110 02:48:57.098234  190834 logs.go:282] 0 containers: []
	W0110 02:48:57.098243  190834 logs.go:284] No container was found matching "kube-apiserver"
	I0110 02:48:57.098252  190834 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0110 02:48:57.098315  190834 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0110 02:48:57.125219  190834 cri.go:96] found id: ""
	I0110 02:48:57.125247  190834 logs.go:282] 0 containers: []
	W0110 02:48:57.125261  190834 logs.go:284] No container was found matching "etcd"
	I0110 02:48:57.125268  190834 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0110 02:48:57.125342  190834 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0110 02:48:57.150139  190834 cri.go:96] found id: ""
	I0110 02:48:57.150167  190834 logs.go:282] 0 containers: []
	W0110 02:48:57.150180  190834 logs.go:284] No container was found matching "coredns"
	I0110 02:48:57.150188  190834 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0110 02:48:57.150254  190834 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0110 02:48:57.175259  190834 cri.go:96] found id: ""
	I0110 02:48:57.175284  190834 logs.go:282] 0 containers: []
	W0110 02:48:57.175294  190834 logs.go:284] No container was found matching "kube-scheduler"
	I0110 02:48:57.175300  190834 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0110 02:48:57.175355  190834 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0110 02:48:57.200932  190834 cri.go:96] found id: ""
	I0110 02:48:57.200955  190834 logs.go:282] 0 containers: []
	W0110 02:48:57.200965  190834 logs.go:284] No container was found matching "kube-proxy"
	I0110 02:48:57.200988  190834 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0110 02:48:57.201068  190834 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0110 02:48:57.227348  190834 cri.go:96] found id: ""
	I0110 02:48:57.227374  190834 logs.go:282] 0 containers: []
	W0110 02:48:57.227383  190834 logs.go:284] No container was found matching "kube-controller-manager"
	I0110 02:48:57.227390  190834 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0110 02:48:57.227445  190834 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0110 02:48:57.253778  190834 cri.go:96] found id: ""
	I0110 02:48:57.253801  190834 logs.go:282] 0 containers: []
	W0110 02:48:57.253810  190834 logs.go:284] No container was found matching "kindnet"
	I0110 02:48:57.253847  190834 logs.go:123] Gathering logs for container status ...
	I0110 02:48:57.253865  190834 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0110 02:48:57.291511  190834 logs.go:123] Gathering logs for kubelet ...
	I0110 02:48:57.291541  190834 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0110 02:48:57.388786  190834 logs.go:123] Gathering logs for dmesg ...
	I0110 02:48:57.388823  190834 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0110 02:48:57.407987  190834 logs.go:123] Gathering logs for describe nodes ...
	I0110 02:48:57.408116  190834 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0110 02:48:57.484131  190834 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 02:48:57.474928    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:57.475936    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:57.477614    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:57.478171    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:57.479931    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0110 02:48:57.474928    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:57.475936    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:57.477614    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:57.478171    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:57.479931    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0110 02:48:57.484205  190834 logs.go:123] Gathering logs for CRI-O ...
	I0110 02:48:57.484232  190834 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0110 02:48:57.522887  190834 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001123817s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W0110 02:48:57.522988  190834 out.go:285] * 
	* 
	W0110 02:48:57.523068  190834 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001123817s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001123817s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 02:48:57.523136  190834 out.go:285] * 
	* 
	W0110 02:48:57.523415  190834 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 02:48:57.529625  190834 out.go:203] 
	W0110 02:48:57.533667  190834 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001123817s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001123817s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 02:48:57.533807  190834 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0110 02:48:57.533861  190834 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0110 02:48:57.537476  190834 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-038359 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 109
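For manual triage outside the harness, a minimal sketch based on the hints printed in the log above (the profile name and start flags are taken from this run; the --extra-config value is the suggestion minikube itself prints, not something this test exercises):

    # inspect the kubelet on the node that failed the 10248/healthz check
    out/minikube-linux-arm64 -p force-systemd-flag-038359 ssh -- sudo systemctl status kubelet --no-pager
    out/minikube-linux-arm64 -p force-systemd-flag-038359 ssh -- sudo journalctl -xeu kubelet -n 100 --no-pager
    out/minikube-linux-arm64 -p force-systemd-flag-038359 ssh -- curl -sS http://127.0.0.1:10248/healthz

    # recreate the profile with the cgroup-driver override suggested by minikube
    out/minikube-linux-arm64 delete -p force-systemd-flag-038359
    out/minikube-linux-arm64 start -p force-systemd-flag-038359 --memory=3072 --force-systemd \
      --driver=docker --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd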
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-038359 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2026-01-10 02:48:57.956483351 +0000 UTC m=+3321.456466731
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-flag-038359
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-038359:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c4c3133d1622ab16d6f792717445cefe393febaf6fd2923c313a89bd257ec66c",
	        "Created": "2026-01-10T02:40:39.892199793Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 191262,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:40:39.96100536Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/c4c3133d1622ab16d6f792717445cefe393febaf6fd2923c313a89bd257ec66c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c4c3133d1622ab16d6f792717445cefe393febaf6fd2923c313a89bd257ec66c/hostname",
	        "HostsPath": "/var/lib/docker/containers/c4c3133d1622ab16d6f792717445cefe393febaf6fd2923c313a89bd257ec66c/hosts",
	        "LogPath": "/var/lib/docker/containers/c4c3133d1622ab16d6f792717445cefe393febaf6fd2923c313a89bd257ec66c/c4c3133d1622ab16d6f792717445cefe393febaf6fd2923c313a89bd257ec66c-json.log",
	        "Name": "/force-systemd-flag-038359",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-038359:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-038359",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c4c3133d1622ab16d6f792717445cefe393febaf6fd2923c313a89bd257ec66c",
	                "LowerDir": "/var/lib/docker/overlay2/b4e35b3b4e1185c4f265ae078159458bd467ee7e26db79a8b19623198b16f366-init/diff:/var/lib/docker/overlay2/2b235871be32ebd3e55c0a490bf5b289824c6be94b4d2763f5bca3b814af3fd1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b4e35b3b4e1185c4f265ae078159458bd467ee7e26db79a8b19623198b16f366/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b4e35b3b4e1185c4f265ae078159458bd467ee7e26db79a8b19623198b16f366/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b4e35b3b4e1185c4f265ae078159458bd467ee7e26db79a8b19623198b16f366/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-038359",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-038359/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-038359",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-038359",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-038359",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5f03014b323f89c9d52a64d795516df8401da296eba3579b63003740685548d8",
	            "SandboxKey": "/var/run/docker/netns/5f03014b323f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33038"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33039"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33042"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33040"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33041"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-038359": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:b9:4a:ed:9b:8f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2d7e3a40384a11e2e8b2bec7f11d424f2ad1d96b00d4dec678b81b667230c351",
	                    "EndpointID": "a591b2e826fde6d8984299f38d330b608af549a65e8c1c58613ca2cfefd920ad",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-038359",
	                        "c4c3133d1622"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
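Most of the inspect dump above matters only for the container state and the published ports; a single field can be pulled out by hand with the same docker Go-template syntax (container name from this run, expected values as captured above):

    docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' force-systemd-flag-038359
    # running pid=191262
    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' force-systemd-flag-038359
    # 33038  (the host port SSH is published on, bound to 127.0.0.1)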
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-038359 -n force-systemd-flag-038359
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-038359 -n force-systemd-flag-038359: exit status 6 (394.146853ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0110 02:48:58.361261  220471 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-038359" does not appear in /home/jenkins/minikube-integration/22414-2353/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
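The status warning above points at a stale kubeconfig entry rather than the node itself; the command minikube suggests, plus a templated status check, would look like this (a sketch for the same profile, only useful once the apiserver is actually reachable):

    out/minikube-linux-arm64 update-context -p force-systemd-flag-038359
    out/minikube-linux-arm64 status -p force-systemd-flag-038359 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'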
helpers_test.go:253: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-038359 logs -n 25
helpers_test.go:261: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ stop    │ -p old-k8s-version-736081 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-736081 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:42 UTC │
	│ start   │ -p old-k8s-version-736081 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:43 UTC │
	│ image   │ old-k8s-version-736081 image list --format=json                                                                                                                                                                                               │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ pause   │ -p old-k8s-version-736081 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │                     │
	│ delete  │ -p old-k8s-version-736081                                                                                                                                                                                                                     │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ delete  │ -p old-k8s-version-736081                                                                                                                                                                                                                     │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ start   │ -p embed-certs-290628 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ addons  │ enable metrics-server -p embed-certs-290628 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │                     │
	│ stop    │ -p embed-certs-290628 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │ 10 Jan 26 02:45 UTC │
	│ addons  │ enable dashboard -p embed-certs-290628 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │ 10 Jan 26 02:45 UTC │
	│ start   │ -p embed-certs-290628 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │ 10 Jan 26 02:46 UTC │
	│ image   │ embed-certs-290628 image list --format=json                                                                                                                                                                                                   │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ pause   │ -p embed-certs-290628 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │                     │
	│ delete  │ -p embed-certs-290628                                                                                                                                                                                                                         │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ delete  │ -p embed-certs-290628                                                                                                                                                                                                                         │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ delete  │ -p disable-driver-mounts-990753                                                                                                                                                                                                               │ disable-driver-mounts-990753 │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ start   │ -p no-preload-676905 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:47 UTC │
	│ addons  │ enable metrics-server -p no-preload-676905 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │                     │
	│ stop    │ -p no-preload-676905 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │ 10 Jan 26 02:47 UTC │
	│ addons  │ enable dashboard -p no-preload-676905 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │ 10 Jan 26 02:47 UTC │
	│ start   │ -p no-preload-676905 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │ 10 Jan 26 02:48 UTC │
	│ image   │ no-preload-676905 image list --format=json                                                                                                                                                                                                    │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │ 10 Jan 26 02:48 UTC │
	│ pause   │ -p no-preload-676905 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │                     │
	│ ssh     │ force-systemd-flag-038359 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-038359    │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │ 10 Jan 26 02:48 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:47:57
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:47:57.263316  217448 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:47:57.263510  217448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:47:57.263536  217448 out.go:374] Setting ErrFile to fd 2...
	I0110 02:47:57.263557  217448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:47:57.263981  217448 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:47:57.264472  217448 out.go:368] Setting JSON to false
	I0110 02:47:57.265502  217448 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5427,"bootTime":1768007851,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 02:47:57.265567  217448 start.go:143] virtualization:  
	I0110 02:47:57.268705  217448 out.go:179] * [no-preload-676905] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:47:57.272500  217448 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:47:57.272569  217448 notify.go:221] Checking for updates...
	I0110 02:47:57.278293  217448 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:47:57.281168  217448 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:47:57.284043  217448 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 02:47:57.287000  217448 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:47:57.289882  217448 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:47:57.293141  217448 config.go:182] Loaded profile config "no-preload-676905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:47:57.293780  217448 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:47:57.323990  217448 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:47:57.324118  217448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:47:57.375338  217448 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:47:57.3658075 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:47:57.375435  217448 docker.go:319] overlay module found
	I0110 02:47:57.378493  217448 out.go:179] * Using the docker driver based on existing profile
	I0110 02:47:57.381261  217448 start.go:309] selected driver: docker
	I0110 02:47:57.381281  217448 start.go:928] validating driver "docker" against &{Name:no-preload-676905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-676905 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:47:57.381369  217448 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:47:57.382058  217448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:47:57.455211  217448 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:47:57.443136408 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:47:57.455533  217448 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:47:57.455571  217448 cni.go:84] Creating CNI manager for ""
	I0110 02:47:57.455630  217448 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:47:57.455669  217448 start.go:353] cluster config:
	{Name:no-preload-676905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-676905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:47:57.459035  217448 out.go:179] * Starting "no-preload-676905" primary control-plane node in "no-preload-676905" cluster
	I0110 02:47:57.461914  217448 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:47:57.464941  217448 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:47:57.467765  217448 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:47:57.467915  217448 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/config.json ...
	I0110 02:47:57.468285  217448 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:47:57.468247  217448 cache.go:107] acquiring lock: {Name:mkdf2b70dc3bfb0100a8d957c112ff6d60b533f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:47:57.468554  217448 cache.go:115] /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0110 02:47:57.468573  217448 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 351.671µs
	I0110 02:47:57.468587  217448 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0110 02:47:57.468604  217448 cache.go:107] acquiring lock: {Name:mk335c7d6e6cec745da4e01893ab73b038bcc37b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:47:57.468641  217448 cache.go:115] /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I0110 02:47:57.468651  217448 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0" took 50.009µs
	I0110 02:47:57.468657  217448 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I0110 02:47:57.468667  217448 cache.go:107] acquiring lock: {Name:mked65ab4ffae9cf085f87a9b484648d81831c67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:47:57.468697  217448 cache.go:115] /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I0110 02:47:57.468707  217448 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0" took 41.164µs
	I0110 02:47:57.468713  217448 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I0110 02:47:57.468722  217448 cache.go:107] acquiring lock: {Name:mkd95889d95a369bd71dc1a2761730b686349d74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:47:57.468752  217448 cache.go:115] /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I0110 02:47:57.468761  217448 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0" took 39.884µs
	I0110 02:47:57.468767  217448 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I0110 02:47:57.468776  217448 cache.go:107] acquiring lock: {Name:mk308c14dc1f570c027c3dfa4b755b4007e7f2d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:47:57.468806  217448 cache.go:115] /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I0110 02:47:57.468811  217448 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0" took 36.028µs
	I0110 02:47:57.468816  217448 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I0110 02:47:57.468827  217448 cache.go:107] acquiring lock: {Name:mk8489c7600ecf98e77b2d0fd473a4d98a759726 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:47:57.468860  217448 cache.go:115] /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I0110 02:47:57.468869  217448 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 43.585µs
	I0110 02:47:57.468875  217448 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I0110 02:47:57.468884  217448 cache.go:107] acquiring lock: {Name:mk712a03fba9f53486bb85d78a3ef35c15cedfe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:47:57.468915  217448 cache.go:115] /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I0110 02:47:57.468924  217448 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 40.82µs
	I0110 02:47:57.468930  217448 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I0110 02:47:57.468959  217448 cache.go:107] acquiring lock: {Name:mk321022d40fb1eff3edb501792389e1ccf9fc85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:47:57.468991  217448 cache.go:115] /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I0110 02:47:57.469000  217448 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 42.625µs
	I0110 02:47:57.469006  217448 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I0110 02:47:57.469012  217448 cache.go:87] Successfully saved all images to host disk.
	I0110 02:47:57.487702  217448 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:47:57.487719  217448 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:47:57.487733  217448 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:47:57.487758  217448 start.go:360] acquireMachinesLock for no-preload-676905: {Name:mk2632012d0afb769f32ccada6003bc8dbc8f0e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:47:57.487895  217448 start.go:364] duration metric: took 122.114µs to acquireMachinesLock for "no-preload-676905"
	I0110 02:47:57.487918  217448 start.go:96] Skipping create...Using existing machine configuration
	I0110 02:47:57.487924  217448 fix.go:54] fixHost starting: 
	I0110 02:47:57.488181  217448 cli_runner.go:164] Run: docker container inspect no-preload-676905 --format={{.State.Status}}
	I0110 02:47:57.505038  217448 fix.go:112] recreateIfNeeded on no-preload-676905: state=Stopped err=<nil>
	W0110 02:47:57.505070  217448 fix.go:138] unexpected machine state, will restart: <nil>
	I0110 02:47:57.508413  217448 out.go:252] * Restarting existing docker container for "no-preload-676905" ...
	I0110 02:47:57.508512  217448 cli_runner.go:164] Run: docker start no-preload-676905
	I0110 02:47:57.760354  217448 cli_runner.go:164] Run: docker container inspect no-preload-676905 --format={{.State.Status}}
	I0110 02:47:57.783165  217448 kic.go:430] container "no-preload-676905" state is running.
	I0110 02:47:57.783555  217448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-676905
	I0110 02:47:57.809744  217448 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/config.json ...
	I0110 02:47:57.809957  217448 machine.go:94] provisionDockerMachine start ...
	I0110 02:47:57.810016  217448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:47:57.833039  217448 main.go:144] libmachine: Using SSH client type: native
	I0110 02:47:57.833357  217448 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I0110 02:47:57.833367  217448 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:47:57.834178  217448 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 02:48:00.983348  217448 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-676905
	
	I0110 02:48:00.983373  217448 ubuntu.go:182] provisioning hostname "no-preload-676905"
	I0110 02:48:00.983446  217448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:48:01.001079  217448 main.go:144] libmachine: Using SSH client type: native
	I0110 02:48:01.001404  217448 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I0110 02:48:01.001425  217448 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-676905 && echo "no-preload-676905" | sudo tee /etc/hostname
	I0110 02:48:01.159668  217448 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-676905
	
	I0110 02:48:01.159755  217448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:48:01.186779  217448 main.go:144] libmachine: Using SSH client type: native
	I0110 02:48:01.187137  217448 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I0110 02:48:01.187154  217448 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-676905' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-676905/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-676905' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:48:01.349274  217448 main.go:144] libmachine: SSH cmd err, output: <nil>: 
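	A quick manual check of what the hostname step above leaves behind (a sketch, not part of the minikube output; it assumes the node container is reachable with docker exec):
	  docker exec no-preload-676905 hostname
	  docker exec no-preload-676905 grep no-preload-676905 /etc/hosts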
	I0110 02:48:01.349326  217448 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2353/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2353/.minikube}
	I0110 02:48:01.349363  217448 ubuntu.go:190] setting up certificates
	I0110 02:48:01.349382  217448 provision.go:84] configureAuth start
	I0110 02:48:01.349522  217448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-676905
	I0110 02:48:01.368613  217448 provision.go:143] copyHostCerts
	I0110 02:48:01.368692  217448 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem, removing ...
	I0110 02:48:01.368714  217448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem
	I0110 02:48:01.368800  217448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem (1082 bytes)
	I0110 02:48:01.368950  217448 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem, removing ...
	I0110 02:48:01.368963  217448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem
	I0110 02:48:01.368993  217448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem (1123 bytes)
	I0110 02:48:01.369064  217448 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem, removing ...
	I0110 02:48:01.369074  217448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem
	I0110 02:48:01.369099  217448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem (1679 bytes)
	I0110 02:48:01.369165  217448 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem org=jenkins.no-preload-676905 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-676905]
	I0110 02:48:01.486131  217448 provision.go:177] copyRemoteCerts
	I0110 02:48:01.486231  217448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:48:01.486290  217448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:48:01.504208  217448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa Username:docker}
	I0110 02:48:01.613416  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:48:01.631479  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 02:48:01.648624  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:48:01.666546  217448 provision.go:87] duration metric: took 317.139953ms to configureAuth
	I0110 02:48:01.666572  217448 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:48:01.666779  217448 config.go:182] Loaded profile config "no-preload-676905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:48:01.666884  217448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:48:01.684254  217448 main.go:144] libmachine: Using SSH client type: native
	I0110 02:48:01.684584  217448 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I0110 02:48:01.684604  217448 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:48:02.053914  217448 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:48:02.053959  217448 machine.go:97] duration metric: took 4.243991003s to provisionDockerMachine
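	A small sketch for verifying the container-runtime options written just above (it assumes the CRI-O systemd unit is named crio, as the restart in that same SSH command implies):
	  minikube -p no-preload-676905 ssh -- sudo cat /etc/sysconfig/crio.minikube
	  minikube -p no-preload-676905 ssh -- sudo systemctl is-active crio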
	I0110 02:48:02.053971  217448 start.go:293] postStartSetup for "no-preload-676905" (driver="docker")
	I0110 02:48:02.053983  217448 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:48:02.054067  217448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:48:02.054135  217448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:48:02.080871  217448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa Username:docker}
	I0110 02:48:02.183769  217448 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:48:02.187113  217448 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:48:02.187143  217448 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:48:02.187154  217448 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/addons for local assets ...
	I0110 02:48:02.187209  217448 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/files for local assets ...
	I0110 02:48:02.187292  217448 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> 41682.pem in /etc/ssl/certs
	I0110 02:48:02.187396  217448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:48:02.195320  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:48:02.213827  217448 start.go:296] duration metric: took 159.841212ms for postStartSetup
	I0110 02:48:02.213923  217448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:48:02.213964  217448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:48:02.232758  217448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa Username:docker}
	I0110 02:48:02.332891  217448 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:48:02.337416  217448 fix.go:56] duration metric: took 4.849486026s for fixHost
	I0110 02:48:02.337439  217448 start.go:83] releasing machines lock for "no-preload-676905", held for 4.849532695s
	I0110 02:48:02.337507  217448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-676905
	I0110 02:48:02.354277  217448 ssh_runner.go:195] Run: cat /version.json
	I0110 02:48:02.354325  217448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:48:02.354672  217448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:48:02.354732  217448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:48:02.375320  217448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa Username:docker}
	I0110 02:48:02.376183  217448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa Username:docker}
	I0110 02:48:02.576289  217448 ssh_runner.go:195] Run: systemctl --version
	I0110 02:48:02.582765  217448 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:48:02.618217  217448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:48:02.622377  217448 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:48:02.622449  217448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:48:02.630240  217448 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 02:48:02.630266  217448 start.go:496] detecting cgroup driver to use...
	I0110 02:48:02.630296  217448 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 02:48:02.630352  217448 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:48:02.645549  217448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:48:02.659531  217448 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:48:02.659590  217448 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:48:02.676330  217448 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:48:02.690833  217448 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:48:02.814271  217448 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:48:02.924511  217448 docker.go:234] disabling docker service ...
	I0110 02:48:02.924573  217448 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:48:02.939602  217448 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:48:02.952499  217448 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:48:03.065076  217448 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:48:03.175511  217448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:48:03.188649  217448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:48:03.203160  217448 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:48:03.203299  217448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:48:03.212331  217448 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 02:48:03.212428  217448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:48:03.221610  217448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:48:03.230580  217448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:48:03.239700  217448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:48:03.247572  217448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:48:03.256486  217448 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:48:03.266023  217448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:48:03.275000  217448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:48:03.282924  217448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:48:03.290492  217448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:48:03.409253  217448 ssh_runner.go:195] Run: sudo systemctl restart crio
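	After the sed edits above and the crio restart, the relevant keys in /etc/crio/crio.conf.d/02-crio.conf should read roughly as follows; the grep is a sketch, and the expected values are inferred from the sed commands in this log rather than read back from the file:
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  # Expected (roughly):
	  #   pause_image = "registry.k8s.io/pause:3.10.1"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",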
	I0110 02:48:03.598934  217448 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:48:03.599038  217448 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:48:03.603372  217448 start.go:574] Will wait 60s for crictl version
	I0110 02:48:03.603461  217448 ssh_runner.go:195] Run: which crictl
	I0110 02:48:03.607014  217448 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:48:03.631193  217448 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:48:03.631292  217448 ssh_runner.go:195] Run: crio --version
	I0110 02:48:03.660321  217448 ssh_runner.go:195] Run: crio --version
	I0110 02:48:03.694316  217448 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:48:03.697339  217448 cli_runner.go:164] Run: docker network inspect no-preload-676905 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:48:03.713520  217448 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 02:48:03.717394  217448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:48:03.726925  217448 kubeadm.go:884] updating cluster {Name:no-preload-676905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-676905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:48:03.727035  217448 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:48:03.727084  217448 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:48:03.761764  217448 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:48:03.761788  217448 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:48:03.761796  217448 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 02:48:03.761891  217448 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-676905 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-676905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
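	The kubelet flags above land in a systemd drop-in; a sketch for inspecting it on the node (the drop-in path is taken from the scp step a few lines further down):
	  sudo systemctl cat kubelet
	  sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf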
	I0110 02:48:03.761970  217448 ssh_runner.go:195] Run: crio config
	I0110 02:48:03.833600  217448 cni.go:84] Creating CNI manager for ""
	I0110 02:48:03.833625  217448 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:48:03.833640  217448 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:48:03.833661  217448 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-676905 NodeName:no-preload-676905 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:48:03.833780  217448 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-676905"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:48:03.833859  217448 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:48:03.841328  217448 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:48:03.841401  217448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:48:03.849611  217448 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0110 02:48:03.862219  217448 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:48:03.874507  217448 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
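	Once kubeadm.yaml.new has been copied over (the scp line above), the generated config can be sanity-checked; a sketch assuming the kubeadm binary sits next to the kubelet under /var/lib/minikube/binaries/v1.35.0 and that this kubeadm build supports the validate subcommand:
	  sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new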
	I0110 02:48:03.886955  217448 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:48:03.890625  217448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:48:03.900051  217448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:48:04.014846  217448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:48:04.036228  217448 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905 for IP: 192.168.76.2
	I0110 02:48:04.036300  217448 certs.go:195] generating shared ca certs ...
	I0110 02:48:04.036329  217448 certs.go:227] acquiring lock for ca certs: {Name:mk60bb8578732e3e8efcf11c7d77fb48585828c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:48:04.036517  217448 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key
	I0110 02:48:04.036595  217448 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key
	I0110 02:48:04.036634  217448 certs.go:257] generating profile certs ...
	I0110 02:48:04.036770  217448 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/client.key
	I0110 02:48:04.036900  217448 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/apiserver.key.9031fc60
	I0110 02:48:04.036996  217448 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/proxy-client.key
	I0110 02:48:04.037158  217448 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem (1338 bytes)
	W0110 02:48:04.037216  217448 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168_empty.pem, impossibly tiny 0 bytes
	I0110 02:48:04.037242  217448 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:48:04.037302  217448 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:48:04.037367  217448 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:48:04.037420  217448 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem (1679 bytes)
	I0110 02:48:04.037525  217448 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:48:04.038173  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:48:04.055833  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:48:04.074084  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:48:04.092187  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 02:48:04.110857  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0110 02:48:04.127709  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 02:48:04.144401  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:48:04.169299  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 02:48:04.209511  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:48:04.236860  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I0110 02:48:04.264773  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I0110 02:48:04.285085  217448 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:48:04.298451  217448 ssh_runner.go:195] Run: openssl version
	I0110 02:48:04.305608  217448 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I0110 02:48:04.313349  217448 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I0110 02:48:04.321024  217448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I0110 02:48:04.324798  217448 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:58 /usr/share/ca-certificates/41682.pem
	I0110 02:48:04.324912  217448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I0110 02:48:04.366756  217448 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:48:04.374107  217448 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:48:04.381237  217448 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:48:04.389206  217448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:48:04.392683  217448 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:48:04.392743  217448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:48:04.438525  217448 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:48:04.445758  217448 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I0110 02:48:04.452764  217448 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I0110 02:48:04.460062  217448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I0110 02:48:04.466463  217448 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:58 /usr/share/ca-certificates/4168.pem
	I0110 02:48:04.466534  217448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I0110 02:48:04.508659  217448 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
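	The test/ln/openssl sequence above is the standard OpenSSL hash-symlink setup; done by hand it would look roughly like this (a sketch; paths taken from the log):
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # b5213941.0 in this run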
	I0110 02:48:04.515844  217448 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:48:04.519416  217448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 02:48:04.560244  217448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 02:48:04.601329  217448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 02:48:04.642136  217448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 02:48:04.683715  217448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 02:48:04.728721  217448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0110 02:48:04.775453  217448 kubeadm.go:401] StartCluster: {Name:no-preload-676905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-676905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:48:04.775603  217448 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:48:04.775715  217448 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:48:04.843372  217448 cri.go:96] found id: ""
	I0110 02:48:04.843486  217448 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:48:04.852517  217448 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 02:48:04.852586  217448 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 02:48:04.852669  217448 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 02:48:04.865524  217448 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 02:48:04.866016  217448 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-676905" does not appear in /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:48:04.866186  217448 kubeconfig.go:62] /home/jenkins/minikube-integration/22414-2353/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-676905" cluster setting kubeconfig missing "no-preload-676905" context setting]
	I0110 02:48:04.866514  217448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:48:04.869237  217448 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 02:48:04.877673  217448 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0110 02:48:04.877705  217448 kubeadm.go:602] duration metric: took 25.101499ms to restartPrimaryControlPlane
	I0110 02:48:04.877715  217448 kubeadm.go:403] duration metric: took 102.273646ms to StartCluster
	I0110 02:48:04.877729  217448 settings.go:142] acquiring lock: {Name:mkf19ab3097ad600bdb33c5b47b20062ddaabac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:48:04.877803  217448 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:48:04.878531  217448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:48:04.878996  217448 config.go:182] Loaded profile config "no-preload-676905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:48:04.879048  217448 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:48:04.879116  217448 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:48:04.879443  217448 addons.go:70] Setting storage-provisioner=true in profile "no-preload-676905"
	I0110 02:48:04.879459  217448 addons.go:239] Setting addon storage-provisioner=true in "no-preload-676905"
	W0110 02:48:04.879474  217448 addons.go:248] addon storage-provisioner should already be in state true
	I0110 02:48:04.879511  217448 host.go:66] Checking if "no-preload-676905" exists ...
	I0110 02:48:04.879758  217448 addons.go:70] Setting dashboard=true in profile "no-preload-676905"
	I0110 02:48:04.879889  217448 addons.go:239] Setting addon dashboard=true in "no-preload-676905"
	W0110 02:48:04.879925  217448 addons.go:248] addon dashboard should already be in state true
	I0110 02:48:04.879964  217448 host.go:66] Checking if "no-preload-676905" exists ...
	I0110 02:48:04.880115  217448 cli_runner.go:164] Run: docker container inspect no-preload-676905 --format={{.State.Status}}
	I0110 02:48:04.880507  217448 cli_runner.go:164] Run: docker container inspect no-preload-676905 --format={{.State.Status}}
	I0110 02:48:04.882729  217448 addons.go:70] Setting default-storageclass=true in profile "no-preload-676905"
	I0110 02:48:04.882753  217448 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-676905"
	I0110 02:48:04.883094  217448 cli_runner.go:164] Run: docker container inspect no-preload-676905 --format={{.State.Status}}
	I0110 02:48:04.885298  217448 out.go:179] * Verifying Kubernetes components...
	I0110 02:48:04.888698  217448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:48:04.939279  217448 addons.go:239] Setting addon default-storageclass=true in "no-preload-676905"
	W0110 02:48:04.939302  217448 addons.go:248] addon default-storageclass should already be in state true
	I0110 02:48:04.939326  217448 host.go:66] Checking if "no-preload-676905" exists ...
	I0110 02:48:04.939724  217448 cli_runner.go:164] Run: docker container inspect no-preload-676905 --format={{.State.Status}}
	I0110 02:48:04.950331  217448 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 02:48:04.950402  217448 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:48:04.954642  217448 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 02:48:04.954753  217448 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:48:04.954768  217448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:48:04.954832  217448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:48:04.958452  217448 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 02:48:04.958478  217448 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 02:48:04.958553  217448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:48:05.003935  217448 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:48:05.003957  217448 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:48:05.004017  217448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:48:05.014631  217448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa Username:docker}
	I0110 02:48:05.030541  217448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa Username:docker}
	I0110 02:48:05.035098  217448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa Username:docker}
	I0110 02:48:05.265198  217448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:48:05.272355  217448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:48:05.304154  217448 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 02:48:05.304179  217448 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 02:48:05.384642  217448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:48:05.396197  217448 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 02:48:05.396224  217448 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 02:48:05.470425  217448 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 02:48:05.470447  217448 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 02:48:05.548628  217448 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 02:48:05.548651  217448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 02:48:05.573085  217448 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 02:48:05.573108  217448 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 02:48:05.592802  217448 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 02:48:05.592826  217448 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 02:48:05.617599  217448 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 02:48:05.617623  217448 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 02:48:05.631905  217448 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 02:48:05.631930  217448 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 02:48:05.653961  217448 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:48:05.653987  217448 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 02:48:05.680573  217448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:48:10.528668  217448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.263381253s)
	I0110 02:48:10.528777  217448 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.256402741s)
	I0110 02:48:10.528837  217448 node_ready.go:35] waiting up to 6m0s for node "no-preload-676905" to be "Ready" ...
	I0110 02:48:10.529207  217448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.144540329s)
	I0110 02:48:10.529353  217448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.848748963s)
	I0110 02:48:10.532856  217448 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-676905 addons enable metrics-server
	
	I0110 02:48:10.547896  217448 node_ready.go:49] node "no-preload-676905" is "Ready"
	I0110 02:48:10.547963  217448 node_ready.go:38] duration metric: took 19.086177ms for node "no-preload-676905" to be "Ready" ...
	I0110 02:48:10.547992  217448 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:48:10.548079  217448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:48:10.559312  217448 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0110 02:48:10.562176  217448 addons.go:530] duration metric: took 5.683056066s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
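
	A minimal follow-up sketch for the metrics-server hint printed a few lines above, assuming the command is run against the same profile; the deployment name metrics-server in kube-system is the name the addon normally creates, not something shown in this log:

		# enable the addon the log suggests, then confirm it is listed and its deployment is rolled out
		minikube -p no-preload-676905 addons enable metrics-server
		minikube -p no-preload-676905 addons list | grep metrics-server
		kubectl -n kube-system rollout status deploy/metrics-server
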
	I0110 02:48:10.563420  217448 api_server.go:72] duration metric: took 5.684341415s to wait for apiserver process to appear ...
	I0110 02:48:10.563472  217448 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:48:10.563505  217448 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:48:10.572962  217448 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 02:48:10.574110  217448 api_server.go:141] control plane version: v1.35.0
	I0110 02:48:10.574166  217448 api_server.go:131] duration metric: took 10.673052ms to wait for apiserver health ...
	I0110 02:48:10.574189  217448 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:48:10.577455  217448 system_pods.go:59] 8 kube-system pods found
	I0110 02:48:10.577496  217448 system_pods.go:61] "coredns-7d764666f9-v67dz" [d212f09c-4573-4e47-ad15-50c9fdfeecd6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:48:10.577508  217448 system_pods.go:61] "etcd-no-preload-676905" [057fbf4d-96fb-423a-8d97-26392378f6a2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:48:10.577515  217448 system_pods.go:61] "kindnet-tsk2v" [d1006bf9-a95f-4260-9048-a78402602ff2] Running
	I0110 02:48:10.577523  217448 system_pods.go:61] "kube-apiserver-no-preload-676905" [9f7e4ecd-63b8-42f7-8ef7-a4ce47c14578] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:48:10.577533  217448 system_pods.go:61] "kube-controller-manager-no-preload-676905" [2baf8a7d-876b-4afe-a381-56e08abc7cc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:48:10.577541  217448 system_pods.go:61] "kube-proxy-r74hc" [0b78d574-77fa-4ec1-b986-d412d22f6a13] Running
	I0110 02:48:10.577548  217448 system_pods.go:61] "kube-scheduler-no-preload-676905" [71e28fec-7176-4dd8-89e0-77b4b1637652] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:48:10.577557  217448 system_pods.go:61] "storage-provisioner" [2867de9b-b16f-4cff-b553-1fb2f52db72b] Running
	I0110 02:48:10.577563  217448 system_pods.go:74] duration metric: took 3.342844ms to wait for pod list to return data ...
	I0110 02:48:10.577571  217448 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:48:10.580372  217448 default_sa.go:45] found service account: "default"
	I0110 02:48:10.580397  217448 default_sa.go:55] duration metric: took 2.817295ms for default service account to be created ...
	I0110 02:48:10.580407  217448 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:48:10.583350  217448 system_pods.go:86] 8 kube-system pods found
	I0110 02:48:10.583387  217448 system_pods.go:89] "coredns-7d764666f9-v67dz" [d212f09c-4573-4e47-ad15-50c9fdfeecd6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:48:10.583397  217448 system_pods.go:89] "etcd-no-preload-676905" [057fbf4d-96fb-423a-8d97-26392378f6a2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:48:10.583403  217448 system_pods.go:89] "kindnet-tsk2v" [d1006bf9-a95f-4260-9048-a78402602ff2] Running
	I0110 02:48:10.583409  217448 system_pods.go:89] "kube-apiserver-no-preload-676905" [9f7e4ecd-63b8-42f7-8ef7-a4ce47c14578] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:48:10.583422  217448 system_pods.go:89] "kube-controller-manager-no-preload-676905" [2baf8a7d-876b-4afe-a381-56e08abc7cc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:48:10.583429  217448 system_pods.go:89] "kube-proxy-r74hc" [0b78d574-77fa-4ec1-b986-d412d22f6a13] Running
	I0110 02:48:10.583438  217448 system_pods.go:89] "kube-scheduler-no-preload-676905" [71e28fec-7176-4dd8-89e0-77b4b1637652] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:48:10.583443  217448 system_pods.go:89] "storage-provisioner" [2867de9b-b16f-4cff-b553-1fb2f52db72b] Running
	I0110 02:48:10.583453  217448 system_pods.go:126] duration metric: took 3.040362ms to wait for k8s-apps to be running ...
	I0110 02:48:10.583464  217448 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:48:10.583515  217448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:48:10.598726  217448 system_svc.go:56] duration metric: took 15.254107ms WaitForService to wait for kubelet
	I0110 02:48:10.598756  217448 kubeadm.go:587] duration metric: took 5.719678497s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:48:10.598773  217448 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:48:10.602655  217448 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 02:48:10.602730  217448 node_conditions.go:123] node cpu capacity is 2
	I0110 02:48:10.602759  217448 node_conditions.go:105] duration metric: took 3.978724ms to run NodePressure ...
	I0110 02:48:10.602788  217448 start.go:242] waiting for startup goroutines ...
	I0110 02:48:10.602826  217448 start.go:247] waiting for cluster config update ...
	I0110 02:48:10.602850  217448 start.go:256] writing updated cluster config ...
	I0110 02:48:10.603171  217448 ssh_runner.go:195] Run: rm -f paused
	I0110 02:48:10.607990  217448 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:48:10.612672  217448 pod_ready.go:83] waiting for pod "coredns-7d764666f9-v67dz" in "kube-system" namespace to be "Ready" or be gone ...
	W0110 02:48:12.639701  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	W0110 02:48:15.119461  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	W0110 02:48:17.623601  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	W0110 02:48:20.119480  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	W0110 02:48:22.618312  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	W0110 02:48:25.118867  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	W0110 02:48:27.618395  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	W0110 02:48:29.618516  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	W0110 02:48:32.118171  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	W0110 02:48:34.617815  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	W0110 02:48:36.618190  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	W0110 02:48:39.117656  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	W0110 02:48:41.118337  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	I0110 02:48:43.121193  217448 pod_ready.go:94] pod "coredns-7d764666f9-v67dz" is "Ready"
	I0110 02:48:43.121220  217448 pod_ready.go:86] duration metric: took 32.50852248s for pod "coredns-7d764666f9-v67dz" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:48:43.128158  217448 pod_ready.go:83] waiting for pod "etcd-no-preload-676905" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:48:43.132783  217448 pod_ready.go:94] pod "etcd-no-preload-676905" is "Ready"
	I0110 02:48:43.132811  217448 pod_ready.go:86] duration metric: took 4.626469ms for pod "etcd-no-preload-676905" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:48:43.135423  217448 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-676905" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:48:43.139707  217448 pod_ready.go:94] pod "kube-apiserver-no-preload-676905" is "Ready"
	I0110 02:48:43.139731  217448 pod_ready.go:86] duration metric: took 4.283644ms for pod "kube-apiserver-no-preload-676905" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:48:43.141855  217448 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-676905" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:48:43.316815  217448 pod_ready.go:94] pod "kube-controller-manager-no-preload-676905" is "Ready"
	I0110 02:48:43.316848  217448 pod_ready.go:86] duration metric: took 174.970319ms for pod "kube-controller-manager-no-preload-676905" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:48:43.517346  217448 pod_ready.go:83] waiting for pod "kube-proxy-r74hc" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:48:43.916413  217448 pod_ready.go:94] pod "kube-proxy-r74hc" is "Ready"
	I0110 02:48:43.916450  217448 pod_ready.go:86] duration metric: took 399.075477ms for pod "kube-proxy-r74hc" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:48:44.116583  217448 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-676905" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:48:44.517355  217448 pod_ready.go:94] pod "kube-scheduler-no-preload-676905" is "Ready"
	I0110 02:48:44.517382  217448 pod_ready.go:86] duration metric: took 400.773648ms for pod "kube-scheduler-no-preload-676905" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:48:44.517395  217448 pod_ready.go:40] duration metric: took 33.909369497s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:48:44.571543  217448 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 02:48:44.574929  217448 out.go:203] 
	W0110 02:48:44.577772  217448 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 02:48:44.580829  217448 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:48:44.583727  217448 out.go:179] * Done! kubectl is now configured to use "no-preload-676905" cluster and "default" namespace by default
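
	The version-skew warning above (host kubectl 1.33.2 against cluster 1.35.0) is informational; a minimal way to avoid it, using the passthrough command minikube itself prints, is to run kubectl through minikube so the cluster's own binary version is used:

		# run kubectl at the cluster's version (v1.35.0) via minikube's bundled binary
		minikube -p no-preload-676905 kubectl -- get pods -A
		minikube -p no-preload-676905 kubectl -- version
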
	I0110 02:48:57.060863  190834 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001123817s
	I0110 02:48:57.065379  190834 kubeadm.go:319] 
	I0110 02:48:57.065521  190834 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 02:48:57.065604  190834 kubeadm.go:319] 	- The kubelet is not running
	I0110 02:48:57.065812  190834 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 02:48:57.065826  190834 kubeadm.go:319] 
	I0110 02:48:57.066022  190834 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 02:48:57.066086  190834 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 02:48:57.066161  190834 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 02:48:57.066172  190834 kubeadm.go:319] 
	I0110 02:48:57.067236  190834 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:48:57.068005  190834 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:48:57.068206  190834 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:48:57.068622  190834 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 02:48:57.068635  190834 kubeadm.go:319] 
	I0110 02:48:57.068751  190834 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 02:48:57.068816  190834 kubeadm.go:403] duration metric: took 8m8.180913411s to StartCluster
	I0110 02:48:57.068867  190834 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0110 02:48:57.068936  190834 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0110 02:48:57.098192  190834 cri.go:96] found id: ""
	I0110 02:48:57.098234  190834 logs.go:282] 0 containers: []
	W0110 02:48:57.098243  190834 logs.go:284] No container was found matching "kube-apiserver"
	I0110 02:48:57.098252  190834 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0110 02:48:57.098315  190834 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0110 02:48:57.125219  190834 cri.go:96] found id: ""
	I0110 02:48:57.125247  190834 logs.go:282] 0 containers: []
	W0110 02:48:57.125261  190834 logs.go:284] No container was found matching "etcd"
	I0110 02:48:57.125268  190834 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0110 02:48:57.125342  190834 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0110 02:48:57.150139  190834 cri.go:96] found id: ""
	I0110 02:48:57.150167  190834 logs.go:282] 0 containers: []
	W0110 02:48:57.150180  190834 logs.go:284] No container was found matching "coredns"
	I0110 02:48:57.150188  190834 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0110 02:48:57.150254  190834 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0110 02:48:57.175259  190834 cri.go:96] found id: ""
	I0110 02:48:57.175284  190834 logs.go:282] 0 containers: []
	W0110 02:48:57.175294  190834 logs.go:284] No container was found matching "kube-scheduler"
	I0110 02:48:57.175300  190834 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0110 02:48:57.175355  190834 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0110 02:48:57.200932  190834 cri.go:96] found id: ""
	I0110 02:48:57.200955  190834 logs.go:282] 0 containers: []
	W0110 02:48:57.200965  190834 logs.go:284] No container was found matching "kube-proxy"
	I0110 02:48:57.200988  190834 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0110 02:48:57.201068  190834 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0110 02:48:57.227348  190834 cri.go:96] found id: ""
	I0110 02:48:57.227374  190834 logs.go:282] 0 containers: []
	W0110 02:48:57.227383  190834 logs.go:284] No container was found matching "kube-controller-manager"
	I0110 02:48:57.227390  190834 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0110 02:48:57.227445  190834 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0110 02:48:57.253778  190834 cri.go:96] found id: ""
	I0110 02:48:57.253801  190834 logs.go:282] 0 containers: []
	W0110 02:48:57.253810  190834 logs.go:284] No container was found matching "kindnet"
	I0110 02:48:57.253847  190834 logs.go:123] Gathering logs for container status ...
	I0110 02:48:57.253865  190834 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0110 02:48:57.291511  190834 logs.go:123] Gathering logs for kubelet ...
	I0110 02:48:57.291541  190834 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0110 02:48:57.388786  190834 logs.go:123] Gathering logs for dmesg ...
	I0110 02:48:57.388823  190834 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0110 02:48:57.407987  190834 logs.go:123] Gathering logs for describe nodes ...
	I0110 02:48:57.408116  190834 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0110 02:48:57.484131  190834 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 02:48:57.474928    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:57.475936    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:57.477614    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:57.478171    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:57.479931    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0110 02:48:57.474928    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:57.475936    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:57.477614    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:57.478171    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:57.479931    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0110 02:48:57.484205  190834 logs.go:123] Gathering logs for CRI-O ...
	I0110 02:48:57.484232  190834 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0110 02:48:57.522887  190834 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001123817s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W0110 02:48:57.522988  190834 out.go:285] * 
	W0110 02:48:57.523068  190834 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001123817s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 02:48:57.523136  190834 out.go:285] * 
	W0110 02:48:57.523415  190834 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 02:48:57.529625  190834 out.go:203] 
	W0110 02:48:57.533667  190834 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001123817s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 02:48:57.533807  190834 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0110 02:48:57.533861  190834 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0110 02:48:57.537476  190834 out.go:203] 
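
	The failure above ends where kubeadm's own advice points: the kubelet on force-systemd-flag-038359 never became healthy within 4m0s, so no control-plane containers were created (the empty container status section below is consistent with that). A minimal triage sketch based only on the commands suggested in this log output; exact flags for a retry would depend on what the test originally passed to minikube start:

		# inspect the kubelet on the node, as kubeadm suggests
		minikube -p force-systemd-flag-038359 ssh -- sudo systemctl status kubelet
		minikube -p force-systemd-flag-038359 ssh -- sudo journalctl -xeu kubelet | tail -n 100
		# collect full logs for a bug report, as the message box suggests
		minikube -p force-systemd-flag-038359 logs --file=logs.txt
		# retry with the systemd cgroup driver, per the printed suggestion
		minikube delete -p force-systemd-flag-038359
		minikube start -p force-systemd-flag-038359 --driver=docker --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd
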
	
	
	==> CRI-O <==
	Jan 10 02:40:46 force-systemd-flag-038359 crio[839]: time="2026-01-10T02:40:46.997021961Z" level=info msg="Registered SIGHUP reload watcher"
	Jan 10 02:40:46 force-systemd-flag-038359 crio[839]: time="2026-01-10T02:40:46.997062739Z" level=info msg="Starting seccomp notifier watcher"
	Jan 10 02:40:46 force-systemd-flag-038359 crio[839]: time="2026-01-10T02:40:46.997139397Z" level=info msg="Create NRI interface"
	Jan 10 02:40:46 force-systemd-flag-038359 crio[839]: time="2026-01-10T02:40:46.997281022Z" level=info msg="built-in NRI default validator is disabled"
	Jan 10 02:40:46 force-systemd-flag-038359 crio[839]: time="2026-01-10T02:40:46.99729794Z" level=info msg="runtime interface created"
	Jan 10 02:40:46 force-systemd-flag-038359 crio[839]: time="2026-01-10T02:40:46.997309206Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Jan 10 02:40:46 force-systemd-flag-038359 crio[839]: time="2026-01-10T02:40:46.997319577Z" level=info msg="runtime interface starting up..."
	Jan 10 02:40:46 force-systemd-flag-038359 crio[839]: time="2026-01-10T02:40:46.997325402Z" level=info msg="starting plugins..."
	Jan 10 02:40:46 force-systemd-flag-038359 crio[839]: time="2026-01-10T02:40:46.997337217Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Jan 10 02:40:46 force-systemd-flag-038359 crio[839]: time="2026-01-10T02:40:46.997411726Z" level=info msg="No systemd watchdog enabled"
	Jan 10 02:40:47 force-systemd-flag-038359 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Jan 10 02:40:49 force-systemd-flag-038359 crio[839]: time="2026-01-10T02:40:49.181070838Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=868a915e-93cf-48f7-9a0a-fa6028be7456 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:40:49 force-systemd-flag-038359 crio[839]: time="2026-01-10T02:40:49.181751319Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=599e3dc1-e839-4096-ac39-1a168dc94c6f name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:40:49 force-systemd-flag-038359 crio[839]: time="2026-01-10T02:40:49.182262508Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=3efdac21-4741-42e7-9e31-5eefdb430fdb name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:40:49 force-systemd-flag-038359 crio[839]: time="2026-01-10T02:40:49.182692043Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=74faa0e8-7838-4b28-b25c-24479967bc7b name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:40:49 force-systemd-flag-038359 crio[839]: time="2026-01-10T02:40:49.183157006Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=ced7bb87-3c47-431a-bf2c-75ed7c942ebe name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:40:49 force-systemd-flag-038359 crio[839]: time="2026-01-10T02:40:49.18356195Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=39391e88-fcf2-4329-b873-76ec150a2f73 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:40:49 force-systemd-flag-038359 crio[839]: time="2026-01-10T02:40:49.184151562Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=f8e3af97-44bc-4c51-8a7e-f0b6d376e05e name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:44:55 force-systemd-flag-038359 crio[839]: time="2026-01-10T02:44:55.369077145Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=31e55788-2ddc-4b83-a47c-0954048c2302 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:44:55 force-systemd-flag-038359 crio[839]: time="2026-01-10T02:44:55.369983549Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=d16871e1-b993-4d9e-b08f-805739b4535a name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:44:55 force-systemd-flag-038359 crio[839]: time="2026-01-10T02:44:55.370570741Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=c9170659-87c6-45ab-b436-e146326ef903 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:44:55 force-systemd-flag-038359 crio[839]: time="2026-01-10T02:44:55.371214777Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=d4e6a881-1054-4e40-878f-5121e4c0881c name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:44:55 force-systemd-flag-038359 crio[839]: time="2026-01-10T02:44:55.371655832Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=1d3ecd39-5966-4c9c-a6fd-90fc5e4ab70b name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:44:55 force-systemd-flag-038359 crio[839]: time="2026-01-10T02:44:55.372155601Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=4a8be6b8-abbc-4b5e-83f9-3da0287d5fe9 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:44:55 force-systemd-flag-038359 crio[839]: time="2026-01-10T02:44:55.372546499Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=cefc1528-0ab4-4953-8dd6-3e0860c37c8d name=/runtime.v1.ImageService/ImageStatus
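The ImageStatus queries above are CRI-O confirming it can resolve the standard control-plane images (apiserver, controller-manager, scheduler, proxy, CoreDNS, pause, etcd) for v1.35.0. As a hedged sketch, not taken from the log, the same information can be pulled by hand with crictl while the node is still up:

    $ out/minikube-linux-arm64 -p force-systemd-flag-038359 ssh -- sudo crictl images
    $ out/minikube-linux-arm64 -p force-systemd-flag-038359 ssh -- sudo crictl inspecti registry.k8s.io/pause:3.10.1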
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 02:48:59.119135    5037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:59.120396    5037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:59.122190    5037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:59.122550    5037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:59.124089    5037 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
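Every kubectl attempt above is refused on localhost:8443 and the container-status table is empty, so the API server was never started rather than merely unhealthy. A quick hedged check from inside the node (via out/minikube-linux-arm64 -p force-systemd-flag-038359 ssh, assuming the container is still running):

    $ curl -sk https://localhost:8443/healthz        # connection refused while no apiserver is listening
    $ sudo crictl ps -a --name kube-apiserver        # expected to list nothing in this state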
	
	
	==> dmesg <==
	[ +27.765975] overlayfs: idmapped layers are currently not supported
	[Jan10 02:15] overlayfs: idmapped layers are currently not supported
	[Jan10 02:16] overlayfs: idmapped layers are currently not supported
	[Jan10 02:17] overlayfs: idmapped layers are currently not supported
	[Jan10 02:18] overlayfs: idmapped layers are currently not supported
	[ +23.212409] overlayfs: idmapped layers are currently not supported
	[  +9.866169] overlayfs: idmapped layers are currently not supported
	[Jan10 02:19] overlayfs: idmapped layers are currently not supported
	[Jan10 02:20] overlayfs: idmapped layers are currently not supported
	[ +35.960240] overlayfs: idmapped layers are currently not supported
	[Jan10 02:21] overlayfs: idmapped layers are currently not supported
	[Jan10 02:23] overlayfs: idmapped layers are currently not supported
	[Jan10 02:24] overlayfs: idmapped layers are currently not supported
	[Jan10 02:25] overlayfs: idmapped layers are currently not supported
	[Jan10 02:30] overlayfs: idmapped layers are currently not supported
	[Jan10 02:31] overlayfs: idmapped layers are currently not supported
	[Jan10 02:35] overlayfs: idmapped layers are currently not supported
	[Jan10 02:37] overlayfs: idmapped layers are currently not supported
	[Jan10 02:41] overlayfs: idmapped layers are currently not supported
	[ +37.534021] overlayfs: idmapped layers are currently not supported
	[Jan10 02:43] overlayfs: idmapped layers are currently not supported
	[Jan10 02:44] overlayfs: idmapped layers are currently not supported
	[Jan10 02:45] overlayfs: idmapped layers are currently not supported
	[Jan10 02:46] overlayfs: idmapped layers are currently not supported
	[Jan10 02:48] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 02:48:59 up  1:31,  0 user,  load average: 2.75, 2.25, 1.97
	Linux force-systemd-flag-038359 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Jan 10 02:48:56 force-systemd-flag-038359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:48:56 force-systemd-flag-038359 kubelet[4840]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 10 02:48:56 force-systemd-flag-038359 kubelet[4840]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 10 02:48:56 force-systemd-flag-038359 kubelet[4840]: E0110 02:48:56.982960    4840 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 02:48:56 force-systemd-flag-038359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 02:48:56 force-systemd-flag-038359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 02:48:57 force-systemd-flag-038359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 639.
	Jan 10 02:48:57 force-systemd-flag-038359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:48:57 force-systemd-flag-038359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:48:57 force-systemd-flag-038359 kubelet[4927]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 10 02:48:57 force-systemd-flag-038359 kubelet[4927]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 10 02:48:57 force-systemd-flag-038359 kubelet[4927]: E0110 02:48:57.747144    4927 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 02:48:57 force-systemd-flag-038359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 02:48:57 force-systemd-flag-038359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 02:48:58 force-systemd-flag-038359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 640.
	Jan 10 02:48:58 force-systemd-flag-038359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:48:58 force-systemd-flag-038359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:48:58 force-systemd-flag-038359 kubelet[4955]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 10 02:48:58 force-systemd-flag-038359 kubelet[4955]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 10 02:48:58 force-systemd-flag-038359 kubelet[4955]: E0110 02:48:58.477945    4955 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 02:48:58 force-systemd-flag-038359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 02:48:58 force-systemd-flag-038359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 02:48:59 force-systemd-flag-038359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 641.
	Jan 10 02:48:59 force-systemd-flag-038359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:48:59 force-systemd-flag-038359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
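The kubelet is crash-looping (restart counters 639-641) because its configuration validation now rejects cgroup v1 hosts outright, which is the real cause of this failure. A common, hedged way to confirm which cgroup hierarchy the host exposes (not taken from the log):

    $ stat -fc %T /sys/fs/cgroup/
    cgroup2fs    # unified cgroup v2
    tmpfs        # legacy cgroup v1, matching the validation error above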
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-038359 -n force-systemd-flag-038359
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-038359 -n force-systemd-flag-038359: exit status 6 (445.102444ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0110 02:48:59.753758  220855 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-038359" does not appear in /home/jenkins/minikube-integration/22414-2353/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-flag-038359" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-flag-038359" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-038359
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-038359: (2.536656275s)
--- FAIL: TestForceSystemdFlag (507.40s)

                                                
                                    
x
+
TestForceSystemdEnv (506.58s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-088457 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0110 02:34:04.938136    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:35:18.347651    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-088457 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 109 (8m22.717569671s)

                                                
                                                
-- stdout --
	* [force-systemd-env-088457] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22414
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-088457" primary control-plane node in "force-systemd-env-088457" cluster
	* Pulling base image v0.0.48-1767944074-22401 ...
	* Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:32:29.920850  168670 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:32:29.921055  168670 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:32:29.921080  168670 out.go:374] Setting ErrFile to fd 2...
	I0110 02:32:29.921101  168670 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:32:29.921477  168670 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:32:29.923123  168670 out.go:368] Setting JSON to false
	I0110 02:32:29.924523  168670 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4499,"bootTime":1768007851,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 02:32:29.924596  168670 start.go:143] virtualization:  
	I0110 02:32:29.928187  168670 out.go:179] * [force-systemd-env-088457] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:32:29.932599  168670 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:32:29.932681  168670 notify.go:221] Checking for updates...
	I0110 02:32:29.939271  168670 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:32:29.942372  168670 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:32:29.945355  168670 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 02:32:29.948374  168670 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:32:29.951319  168670 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I0110 02:32:29.954736  168670 config.go:182] Loaded profile config "running-upgrade-970119": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0110 02:32:29.954833  168670 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:32:29.978445  168670 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:32:29.978551  168670 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:32:30.074260  168670 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:32:30.057722111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:32:30.074392  168670 docker.go:319] overlay module found
	I0110 02:32:30.077641  168670 out.go:179] * Using the docker driver based on user configuration
	I0110 02:32:30.080574  168670 start.go:309] selected driver: docker
	I0110 02:32:30.080611  168670 start.go:928] validating driver "docker" against <nil>
	I0110 02:32:30.080632  168670 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:32:30.081538  168670 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:32:30.177376  168670 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:32:30.16810112 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:32:30.177530  168670 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 02:32:30.177761  168670 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 02:32:30.180870  168670 out.go:179] * Using Docker driver with root privileges
	I0110 02:32:30.184831  168670 cni.go:84] Creating CNI manager for ""
	I0110 02:32:30.184937  168670 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:32:30.184955  168670 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 02:32:30.185041  168670 start.go:353] cluster config:
	{Name:force-systemd-env-088457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-088457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:32:30.188197  168670 out.go:179] * Starting "force-systemd-env-088457" primary control-plane node in "force-systemd-env-088457" cluster
	I0110 02:32:30.191061  168670 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:32:30.194079  168670 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:32:30.196886  168670 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:32:30.196935  168670 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 02:32:30.196956  168670 cache.go:65] Caching tarball of preloaded images
	I0110 02:32:30.196967  168670 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:32:30.197042  168670 preload.go:251] Found /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 02:32:30.197051  168670 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:32:30.197155  168670 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/config.json ...
	I0110 02:32:30.197172  168670 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/config.json: {Name:mkc6b7577fe261e7edf18842afbd7e8de20aaa80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:32:30.217097  168670 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:32:30.217120  168670 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:32:30.217141  168670 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:32:30.217175  168670 start.go:360] acquireMachinesLock for force-systemd-env-088457: {Name:mka21724f7c5d2a1e5491330499a43e4c0b9b7d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:32:30.217302  168670 start.go:364] duration metric: took 103.398µs to acquireMachinesLock for "force-systemd-env-088457"
	I0110 02:32:30.217333  168670 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-088457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-088457 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:32:30.217409  168670 start.go:125] createHost starting for "" (driver="docker")
	I0110 02:32:30.220755  168670 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:32:30.220995  168670 start.go:159] libmachine.API.Create for "force-systemd-env-088457" (driver="docker")
	I0110 02:32:30.221035  168670 client.go:173] LocalClient.Create starting
	I0110 02:32:30.221110  168670 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem
	I0110 02:32:30.221153  168670 main.go:144] libmachine: Decoding PEM data...
	I0110 02:32:30.221175  168670 main.go:144] libmachine: Parsing certificate...
	I0110 02:32:30.221235  168670 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem
	I0110 02:32:30.221257  168670 main.go:144] libmachine: Decoding PEM data...
	I0110 02:32:30.221272  168670 main.go:144] libmachine: Parsing certificate...
	I0110 02:32:30.221671  168670 cli_runner.go:164] Run: docker network inspect force-systemd-env-088457 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:32:30.238069  168670 cli_runner.go:211] docker network inspect force-systemd-env-088457 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:32:30.238153  168670 network_create.go:284] running [docker network inspect force-systemd-env-088457] to gather additional debugging logs...
	I0110 02:32:30.238176  168670 cli_runner.go:164] Run: docker network inspect force-systemd-env-088457
	W0110 02:32:30.256697  168670 cli_runner.go:211] docker network inspect force-systemd-env-088457 returned with exit code 1
	I0110 02:32:30.256731  168670 network_create.go:287] error running [docker network inspect force-systemd-env-088457]: docker network inspect force-systemd-env-088457: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-088457 not found
	I0110 02:32:30.256746  168670 network_create.go:289] output of [docker network inspect force-systemd-env-088457]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-088457 not found
	
	** /stderr **
	I0110 02:32:30.256839  168670 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:32:30.272521  168670 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dba69832168e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:28:cf:b8:5c:6c} reservation:<nil>}
	I0110 02:32:30.272797  168670 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1bcca9209747 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d2:8b:83:a4:43:ae} reservation:<nil>}
	I0110 02:32:30.273108  168670 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-383ddfacc8f8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:26:ff:fb:97:66:af} reservation:<nil>}
	I0110 02:32:30.273524  168670 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a6ab10}
	I0110 02:32:30.273546  168670 network_create.go:124] attempt to create docker network force-systemd-env-088457 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0110 02:32:30.273614  168670 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-088457 force-systemd-env-088457
	I0110 02:32:30.334271  168670 network_create.go:108] docker network force-systemd-env-088457 192.168.76.0/24 created
	I0110 02:32:30.334314  168670 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-env-088457" container
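The subnet walk above skips the bridge networks already claimed by other profiles (192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24) and settles on 192.168.76.0/24. A hedged sketch for inspecting the result, reusing the labels from the docker network create call above:

    $ docker network ls --filter label=created_by.minikube.sigs.k8s.io=true
    $ docker network inspect force-systemd-env-088457 \
        --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
    192.168.76.0/24 192.168.76.1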
	I0110 02:32:30.334385  168670 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:32:30.350672  168670 cli_runner.go:164] Run: docker volume create force-systemd-env-088457 --label name.minikube.sigs.k8s.io=force-systemd-env-088457 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:32:30.367951  168670 oci.go:103] Successfully created a docker volume force-systemd-env-088457
	I0110 02:32:30.368031  168670 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-088457-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-088457 --entrypoint /usr/bin/test -v force-systemd-env-088457:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 02:32:30.911273  168670 oci.go:107] Successfully prepared a docker volume force-systemd-env-088457
	I0110 02:32:30.911342  168670 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:32:30.911354  168670 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 02:32:30.911421  168670 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-088457:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 02:32:34.979771  168670 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-088457:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (4.068312533s)
	I0110 02:32:34.979827  168670 kic.go:203] duration metric: took 4.068469697s to extract preloaded images to volume ...
	W0110 02:32:34.979971  168670 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 02:32:34.980088  168670 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 02:32:35.039903  168670 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-088457 --name force-systemd-env-088457 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-088457 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-088457 --network force-systemd-env-088457 --ip 192.168.76.2 --volume force-systemd-env-088457:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 02:32:35.378864  168670 cli_runner.go:164] Run: docker container inspect force-systemd-env-088457 --format={{.State.Running}}
	I0110 02:32:35.407203  168670 cli_runner.go:164] Run: docker container inspect force-systemd-env-088457 --format={{.State.Status}}
	I0110 02:32:35.434763  168670 cli_runner.go:164] Run: docker exec force-systemd-env-088457 stat /var/lib/dpkg/alternatives/iptables
	I0110 02:32:35.493332  168670 oci.go:144] the created container "force-systemd-env-088457" has a running status.
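The node itself is an ordinary privileged container with the cluster ports published on loopback and the --memory/--cpus limits applied by the docker run above. A hedged check that the limits actually landed (field names are standard docker inspect output; expected values assume the flags took effect):

    $ docker inspect force-systemd-env-088457 --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}'
    3221225472 2000000000    # 3072 MiB and 2 CPUs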
	I0110 02:32:35.493359  168670 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/force-systemd-env-088457/id_rsa...
	I0110 02:32:36.573071  168670 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/force-systemd-env-088457/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0110 02:32:36.573114  168670 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22414-2353/.minikube/machines/force-systemd-env-088457/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 02:32:36.592307  168670 cli_runner.go:164] Run: docker container inspect force-systemd-env-088457 --format={{.State.Status}}
	I0110 02:32:36.613389  168670 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 02:32:36.613411  168670 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-088457 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 02:32:36.683567  168670 cli_runner.go:164] Run: docker container inspect force-systemd-env-088457 --format={{.State.Status}}
	I0110 02:32:36.701523  168670 machine.go:94] provisionDockerMachine start ...
	I0110 02:32:36.701604  168670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-088457
	I0110 02:32:36.719556  168670 main.go:144] libmachine: Using SSH client type: native
	I0110 02:32:36.720052  168670 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I0110 02:32:36.720070  168670 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:32:36.720743  168670 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 02:32:39.867210  168670 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-088457
	
	I0110 02:32:39.867236  168670 ubuntu.go:182] provisioning hostname "force-systemd-env-088457"
	I0110 02:32:39.867308  168670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-088457
	I0110 02:32:39.884720  168670 main.go:144] libmachine: Using SSH client type: native
	I0110 02:32:39.885040  168670 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I0110 02:32:39.885057  168670 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-088457 && echo "force-systemd-env-088457" | sudo tee /etc/hostname
	I0110 02:32:40.049859  168670 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-env-088457
	
	I0110 02:32:40.049958  168670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-088457
	I0110 02:32:40.078651  168670 main.go:144] libmachine: Using SSH client type: native
	I0110 02:32:40.078986  168670 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I0110 02:32:40.079008  168670 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-088457' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-088457/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-088457' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:32:40.232160  168670 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:32:40.232183  168670 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2353/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2353/.minikube}
	I0110 02:32:40.232221  168670 ubuntu.go:190] setting up certificates
	I0110 02:32:40.232231  168670 provision.go:84] configureAuth start
	I0110 02:32:40.232290  168670 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-088457
	I0110 02:32:40.248638  168670 provision.go:143] copyHostCerts
	I0110 02:32:40.248684  168670 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem
	I0110 02:32:40.248716  168670 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem, removing ...
	I0110 02:32:40.248729  168670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem
	I0110 02:32:40.248802  168670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem (1082 bytes)
	I0110 02:32:40.248885  168670 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem
	I0110 02:32:40.248909  168670 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem, removing ...
	I0110 02:32:40.248917  168670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem
	I0110 02:32:40.248945  168670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem (1123 bytes)
	I0110 02:32:40.248995  168670 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem
	I0110 02:32:40.249011  168670 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem, removing ...
	I0110 02:32:40.249020  168670 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem
	I0110 02:32:40.249045  168670 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem (1679 bytes)
	I0110 02:32:40.249094  168670 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-088457 san=[127.0.0.1 192.168.76.2 force-systemd-env-088457 localhost minikube]
	I0110 02:32:40.341788  168670 provision.go:177] copyRemoteCerts
	I0110 02:32:40.341864  168670 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:32:40.341930  168670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-088457
	I0110 02:32:40.360370  168670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/force-systemd-env-088457/id_rsa Username:docker}
	I0110 02:32:40.462954  168670 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0110 02:32:40.463017  168670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:32:40.479360  168670 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0110 02:32:40.479459  168670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0110 02:32:40.502577  168670 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0110 02:32:40.502634  168670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:32:40.518750  168670 provision.go:87] duration metric: took 286.498527ms to configureAuth
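configureAuth issues a server certificate whose SANs cover 127.0.0.1, the static node IP 192.168.76.2, the profile name, localhost and minikube. A hedged sketch for inspecting the SANs on the generated server.pem (path taken from the scp lines above):

    $ openssl x509 -in /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem \
        -noout -text | grep -A1 'Subject Alternative Name'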
	I0110 02:32:40.518817  168670 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:32:40.519008  168670 config.go:182] Loaded profile config "force-systemd-env-088457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:32:40.519120  168670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-088457
	I0110 02:32:40.538269  168670 main.go:144] libmachine: Using SSH client type: native
	I0110 02:32:40.538593  168670 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33013 <nil> <nil>}
	I0110 02:32:40.538613  168670 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:32:40.855191  168670 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:32:40.855211  168670 machine.go:97] duration metric: took 4.153668899s to provisionDockerMachine
	I0110 02:32:40.855223  168670 client.go:176] duration metric: took 10.634176798s to LocalClient.Create
	I0110 02:32:40.855242  168670 start.go:167] duration metric: took 10.634249239s to libmachine.API.Create "force-systemd-env-088457"
	I0110 02:32:40.855249  168670 start.go:293] postStartSetup for "force-systemd-env-088457" (driver="docker")
	I0110 02:32:40.855268  168670 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:32:40.855330  168670 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:32:40.855373  168670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-088457
	I0110 02:32:40.873031  168670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/force-systemd-env-088457/id_rsa Username:docker}
	I0110 02:32:40.975405  168670 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:32:40.978438  168670 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:32:40.978464  168670 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:32:40.978484  168670 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/addons for local assets ...
	I0110 02:32:40.978536  168670 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/files for local assets ...
	I0110 02:32:40.978610  168670 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> 41682.pem in /etc/ssl/certs
	I0110 02:32:40.978617  168670 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> /etc/ssl/certs/41682.pem
	I0110 02:32:40.978709  168670 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:32:40.985901  168670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:32:41.002769  168670 start.go:296] duration metric: took 147.506289ms for postStartSetup
	I0110 02:32:41.003119  168670 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-088457
	I0110 02:32:41.020629  168670 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/config.json ...
	I0110 02:32:41.020923  168670 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:32:41.020979  168670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-088457
	I0110 02:32:41.037243  168670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/force-systemd-env-088457/id_rsa Username:docker}
	I0110 02:32:41.136594  168670 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:32:41.140737  168670 start.go:128] duration metric: took 10.923313966s to createHost
	I0110 02:32:41.140758  168670 start.go:83] releasing machines lock for "force-systemd-env-088457", held for 10.923442413s
	I0110 02:32:41.140821  168670 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-088457
	I0110 02:32:41.157599  168670 ssh_runner.go:195] Run: cat /version.json
	I0110 02:32:41.157647  168670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-088457
	I0110 02:32:41.157893  168670 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:32:41.157942  168670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-088457
	I0110 02:32:41.180896  168670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/force-systemd-env-088457/id_rsa Username:docker}
	I0110 02:32:41.181434  168670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33013 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/force-systemd-env-088457/id_rsa Username:docker}
	I0110 02:32:41.386917  168670 ssh_runner.go:195] Run: systemctl --version
	I0110 02:32:41.393234  168670 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:32:41.430731  168670 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:32:41.434897  168670 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:32:41.434967  168670 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:32:41.462990  168670 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 02:32:41.463013  168670 start.go:496] detecting cgroup driver to use...
	I0110 02:32:41.463050  168670 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 02:32:41.463108  168670 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:32:41.480928  168670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:32:41.493663  168670 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:32:41.493748  168670 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:32:41.513961  168670 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:32:41.532895  168670 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:32:41.642639  168670 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:32:41.771094  168670 docker.go:234] disabling docker service ...
	I0110 02:32:41.771157  168670 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:32:41.793886  168670 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:32:41.806927  168670 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:32:41.929036  168670 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:32:42.055955  168670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:32:42.070297  168670 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:32:42.088532  168670 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:32:42.088650  168670 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:32:42.102345  168670 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 02:32:42.102438  168670 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:32:42.120897  168670 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:32:42.144480  168670 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:32:42.160853  168670 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:32:42.174545  168670 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:32:42.186740  168670 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:32:42.205860  168670 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:32:42.217559  168670 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:32:42.228295  168670 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:32:42.239064  168670 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:32:42.368171  168670 ssh_runner.go:195] Run: sudo systemctl restart crio
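The sed edits above all target /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image, force cgroup_manager to "systemd", set conmon_cgroup to "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls before CRI-O is restarted. A manual spot check of the result (a sketch, not captured in this log) could be:
	# Confirm the drop-in now carries the intended settings
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# The merged effective configuration can also be dumped, as minikube itself does later with "crio config"
	sudo crio config | grep -E 'cgroup_manager|conmon_cgroup'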
	I0110 02:32:42.522169  168670 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:32:42.522265  168670 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:32:42.526032  168670 start.go:574] Will wait 60s for crictl version
	I0110 02:32:42.526106  168670 ssh_runner.go:195] Run: which crictl
	I0110 02:32:42.529486  168670 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:32:42.553633  168670 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:32:42.553733  168670 ssh_runner.go:195] Run: crio --version
	I0110 02:32:42.581754  168670 ssh_runner.go:195] Run: crio --version
	I0110 02:32:42.615010  168670 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:32:42.617805  168670 cli_runner.go:164] Run: docker network inspect force-systemd-env-088457 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:32:42.633803  168670 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 02:32:42.637560  168670 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:32:42.646797  168670 kubeadm.go:884] updating cluster {Name:force-systemd-env-088457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-088457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...

	I0110 02:32:42.646907  168670 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:32:42.646976  168670 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:32:42.684426  168670 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:32:42.684451  168670 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:32:42.684508  168670 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:32:42.708307  168670 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:32:42.708331  168670 cache_images.go:86] Images are preloaded, skipping loading
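"Images are preloaded" means the kicbase preload tarball has already populated CRI-O's image store, so no image pull or extraction is needed before kubeadm runs. A manual equivalent of this check would be (a sketch, not from this log):
	# List what CRI-O already has; kubeadm's image pull step becomes a no-op if these are present
	sudo crictl images | grep registry.k8s.io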
	I0110 02:32:42.708338  168670 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 02:32:42.708418  168670 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-088457 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-088457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
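The unit fragment above is the kubelet drop-in that minikube writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf just below (the 374-byte scp). The empty ExecStart= line clears the packaged default before the minikube-specific command line (systemd cgroups flags, node IP, hostname override) is set. To inspect the merged unit on the node (a sketch, not part of this log):
	# Show the kubelet unit together with the minikube drop-in override
	systemctl cat kubelet
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf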
	I0110 02:32:42.708509  168670 ssh_runner.go:195] Run: crio config
	I0110 02:32:42.778285  168670 cni.go:84] Creating CNI manager for ""
	I0110 02:32:42.778308  168670 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:32:42.778328  168670 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:32:42.778351  168670 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-088457 NodeName:force-systemd-env-088457 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:32:42.778473  168670 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-088457"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:32:42.778554  168670 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:32:42.786283  168670 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:32:42.786394  168670 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:32:42.794583  168670 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0110 02:32:42.807043  168670 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:32:42.819256  168670 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
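The 2220-byte file written here is the kubeadm config printed above; it is later promoted to /var/tmp/minikube/kubeadm.yaml and passed to kubeadm init. If the failure below needed to be reproduced by hand, the same config can be exercised without mutating node state (a sketch, not captured in this log; paths as used in this run):
	# Dry-run kubeadm against the generated config to surface validation problems early
	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run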
	I0110 02:32:42.831616  168670 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:32:42.835030  168670 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:32:42.844463  168670 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:32:42.958849  168670 ssh_runner.go:195] Run: sudo systemctl start kubelet
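At this point the kubelet unit has been started but kubeadm has not yet written /var/lib/kubelet/config.yaml (which the drop-in's ExecStart requires), so kubelet may restart repeatedly until the init phases below run. Its state could be checked with (a sketch, not from this log):
	# Check whether kubelet is up and why it is restarting, if it is
	systemctl is-active kubelet
	sudo journalctl -u kubelet --no-pager | tail -n 20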
	I0110 02:32:42.974913  168670 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457 for IP: 192.168.76.2
	I0110 02:32:42.974934  168670 certs.go:195] generating shared ca certs ...
	I0110 02:32:42.974950  168670 certs.go:227] acquiring lock for ca certs: {Name:mk60bb8578732e3e8efcf11c7d77fb48585828c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:32:42.975173  168670 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key
	I0110 02:32:42.975228  168670 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key
	I0110 02:32:42.975240  168670 certs.go:257] generating profile certs ...
	I0110 02:32:42.975296  168670 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/client.key
	I0110 02:32:42.975309  168670 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/client.crt with IP's: []
	I0110 02:32:43.206801  168670 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/client.crt ...
	I0110 02:32:43.206840  168670 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/client.crt: {Name:mk875ee7796dfbfab04cf1d21cfff8d5ee0ca291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:32:43.207051  168670 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/client.key ...
	I0110 02:32:43.207068  168670 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/client.key: {Name:mkbb054eb442a5075e0fa04494bfc8272d306496 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:32:43.207162  168670 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/apiserver.key.1922f643
	I0110 02:32:43.207184  168670 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/apiserver.crt.1922f643 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0110 02:32:43.344577  168670 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/apiserver.crt.1922f643 ...
	I0110 02:32:43.344608  168670 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/apiserver.crt.1922f643: {Name:mkd3a11878d1fabdf2ce107bf19110d1dd1d8738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:32:43.344782  168670 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/apiserver.key.1922f643 ...
	I0110 02:32:43.344798  168670 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/apiserver.key.1922f643: {Name:mk22b7d61fbc25e582aa75c48c86ccc2fb31aa49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:32:43.344872  168670 certs.go:382] copying /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/apiserver.crt.1922f643 -> /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/apiserver.crt
	I0110 02:32:43.344961  168670 certs.go:386] copying /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/apiserver.key.1922f643 -> /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/apiserver.key
	I0110 02:32:43.345018  168670 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/proxy-client.key
	I0110 02:32:43.345038  168670 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/proxy-client.crt with IP's: []
	I0110 02:32:44.011477  168670 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/proxy-client.crt ...
	I0110 02:32:44.011509  168670 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/proxy-client.crt: {Name:mke326d51e0002d16d2c44602cee359fb6d3dc52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:32:44.011715  168670 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/proxy-client.key ...
	I0110 02:32:44.011730  168670 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/proxy-client.key: {Name:mk45164d69acfc53c8b683dcc093cc91158e2a61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:32:44.011839  168670 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0110 02:32:44.011862  168670 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0110 02:32:44.011875  168670 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0110 02:32:44.011893  168670 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0110 02:32:44.011916  168670 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0110 02:32:44.011936  168670 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0110 02:32:44.011951  168670 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0110 02:32:44.011963  168670 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0110 02:32:44.012016  168670 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem (1338 bytes)
	W0110 02:32:44.012058  168670 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168_empty.pem, impossibly tiny 0 bytes
	I0110 02:32:44.012070  168670 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:32:44.012097  168670 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:32:44.012131  168670 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:32:44.012158  168670 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem (1679 bytes)
	I0110 02:32:44.012205  168670 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:32:44.012242  168670 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> /usr/share/ca-certificates/41682.pem
	I0110 02:32:44.012255  168670 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:32:44.012266  168670 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem -> /usr/share/ca-certificates/4168.pem
	I0110 02:32:44.012801  168670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:32:44.030647  168670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:32:44.048705  168670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:32:44.066191  168670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 02:32:44.083687  168670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0110 02:32:44.101197  168670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:32:44.118059  168670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:32:44.134310  168670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-env-088457/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 02:32:44.150435  168670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I0110 02:32:44.167271  168670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:32:44.184060  168670 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I0110 02:32:44.200839  168670 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:32:44.213259  168670 ssh_runner.go:195] Run: openssl version
	I0110 02:32:44.219342  168670 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I0110 02:32:44.226442  168670 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I0110 02:32:44.233850  168670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I0110 02:32:44.237386  168670 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:58 /usr/share/ca-certificates/41682.pem
	I0110 02:32:44.237452  168670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I0110 02:32:44.278204  168670 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:32:44.285803  168670 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41682.pem /etc/ssl/certs/3ec20f2e.0
	I0110 02:32:44.293088  168670 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:32:44.300693  168670 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:32:44.307974  168670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:32:44.311584  168670 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:32:44.311678  168670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:32:44.353676  168670 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:32:44.361309  168670 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 02:32:44.368844  168670 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I0110 02:32:44.376477  168670 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I0110 02:32:44.384106  168670 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I0110 02:32:44.387872  168670 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:58 /usr/share/ca-certificates/4168.pem
	I0110 02:32:44.387942  168670 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I0110 02:32:44.429361  168670 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:32:44.436724  168670 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4168.pem /etc/ssl/certs/51391683.0
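The openssl x509 -hash -noout calls above compute the subject-name hash OpenSSL uses to look up CA certificates, and each ln -fs creates the matching /etc/ssl/certs/<hash>.0 symlink (b5213941.0 for minikubeCA in this run). The mapping can be verified by hand (a sketch, not part of this log):
	# The link name must equal the certificate's subject hash for OpenSSL lookup to work
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	ls -l /etc/ssl/certs/b5213941.0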
	I0110 02:32:44.443699  168670 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:32:44.448271  168670 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 02:32:44.448327  168670 kubeadm.go:401] StartCluster: {Name:force-systemd-env-088457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-env-088457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:32:44.448411  168670 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:32:44.448487  168670 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:32:44.482576  168670 cri.go:96] found id: ""
	I0110 02:32:44.482654  168670 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:32:44.490589  168670 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 02:32:44.500620  168670 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:32:44.500684  168670 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:32:44.509827  168670 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:32:44.509848  168670 kubeadm.go:158] found existing configuration files:
	
	I0110 02:32:44.509899  168670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:32:44.517760  168670 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:32:44.517837  168670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:32:44.525256  168670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:32:44.533025  168670 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:32:44.533132  168670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:32:44.540557  168670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:32:44.548283  168670 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:32:44.548359  168670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:32:44.555771  168670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:32:44.563646  168670 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:32:44.563734  168670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:32:44.571185  168670 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:32:44.686223  168670 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:32:44.686657  168670 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:32:44.749407  168670 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:36:48.918843  168670 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I0110 02:36:48.918881  168670 kubeadm.go:319] 
	I0110 02:36:48.919106  168670 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 02:36:48.919214  168670 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:36:48.919287  168670 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:36:48.919719  168670 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:36:48.919837  168670 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:36:48.919899  168670 kubeadm.go:319] OS: Linux
	I0110 02:36:48.919977  168670 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:36:48.920064  168670 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:36:48.920149  168670 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:36:48.920233  168670 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:36:48.920319  168670 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:36:48.920402  168670 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:36:48.920716  168670 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:36:48.920800  168670 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:36:48.920882  168670 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:36:48.921011  168670 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:36:48.921182  168670 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:36:48.921342  168670 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:36:48.921760  168670 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 02:36:48.927418  168670 out.go:252]   - Generating certificates and keys ...
	I0110 02:36:48.927588  168670 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:36:48.927707  168670 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:36:48.927883  168670 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 02:36:48.927990  168670 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 02:36:48.928089  168670 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 02:36:48.928180  168670 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 02:36:48.928281  168670 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 02:36:48.928461  168670 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-088457 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 02:36:48.928559  168670 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 02:36:48.928732  168670 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-088457 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 02:36:48.928837  168670 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 02:36:48.928945  168670 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 02:36:48.929028  168670 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 02:36:48.929119  168670 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:36:48.929206  168670 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:36:48.929306  168670 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:36:48.929393  168670 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:36:48.929491  168670 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:36:48.929585  168670 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:36:48.929714  168670 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:36:48.929822  168670 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:36:48.932817  168670 out.go:252]   - Booting up control plane ...
	I0110 02:36:48.932987  168670 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:36:48.933110  168670 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:36:48.933220  168670 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:36:48.933357  168670 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:36:48.935955  168670 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:36:48.936081  168670 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:36:48.936169  168670 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:36:48.936211  168670 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:36:48.936346  168670 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:36:48.936454  168670 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:36:48.936522  168670 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000997477s
	I0110 02:36:48.936529  168670 kubeadm.go:319] 
	I0110 02:36:48.936586  168670 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 02:36:48.936622  168670 kubeadm.go:319] 	- The kubelet is not running
	I0110 02:36:48.936729  168670 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 02:36:48.936737  168670 kubeadm.go:319] 
	I0110 02:36:48.936842  168670 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 02:36:48.936877  168670 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 02:36:48.936911  168670 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 02:36:48.936922  168670 kubeadm.go:319] 
	W0110 02:36:48.937065  168670 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-088457 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-088457 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000997477s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-088457 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-088457 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000997477s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
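This attempt dies in kubeadm's wait-control-plane phase: the certificates, kubeconfigs, and static Pod manifests were all generated, but kubelet never answered on http://127.0.0.1:10248/healthz within the 4-minute window, so the problem is kubelet startup itself. minikube resets and retries below; the commands kubeadm suggests are the natural next step and would look roughly like this on the node (a sketch, not captured in this log):
	# Inspect why kubelet is not serving its healthz endpoint
	systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 50
	curl -sS http://127.0.0.1:10248/healthz
	# The cgroups v1 deprecation warning above names one relevant knob: per that warning,
	# a KubeletConfiguration with FailCgroupV1 set to false (plus skipping the validation)
	# is what explicitly re-allows cgroup v1 hosts on kubelet v1.35+.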
	
	I0110 02:36:48.937144  168670 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0110 02:36:49.364561  168670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:36:49.378316  168670 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:36:49.378378  168670 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:36:49.388289  168670 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:36:49.388310  168670 kubeadm.go:158] found existing configuration files:
	
	I0110 02:36:49.388366  168670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:36:49.396627  168670 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:36:49.396699  168670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:36:49.404037  168670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:36:49.412784  168670 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:36:49.412847  168670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:36:49.420635  168670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:36:49.429547  168670 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:36:49.429607  168670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:36:49.436892  168670 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:36:49.444137  168670 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:36:49.444195  168670 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:36:49.451091  168670 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:36:49.504669  168670 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:36:49.505194  168670 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:36:49.606193  168670 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:36:49.606267  168670 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:36:49.606308  168670 kubeadm.go:319] OS: Linux
	I0110 02:36:49.606357  168670 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:36:49.606409  168670 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:36:49.606459  168670 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:36:49.606525  168670 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:36:49.606585  168670 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:36:49.606638  168670 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:36:49.606694  168670 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:36:49.606746  168670 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:36:49.606794  168670 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:36:49.684047  168670 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:36:49.687242  168670 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:36:49.687374  168670 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:36:49.699977  168670 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 02:36:49.703154  168670 out.go:252]   - Generating certificates and keys ...
	I0110 02:36:49.703237  168670 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:36:49.703306  168670 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:36:49.703387  168670 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0110 02:36:49.703451  168670 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0110 02:36:49.703525  168670 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0110 02:36:49.703586  168670 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0110 02:36:49.703652  168670 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0110 02:36:49.703717  168670 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0110 02:36:49.703840  168670 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0110 02:36:49.703918  168670 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0110 02:36:49.703960  168670 kubeadm.go:319] [certs] Using the existing "sa" key
	I0110 02:36:49.704020  168670 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:36:49.871917  168670 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:36:50.189708  168670 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:36:50.738664  168670 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:36:50.815955  168670 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:36:51.827912  168670 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:36:51.828028  168670 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:36:51.833151  168670 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:36:51.836599  168670 out.go:252]   - Booting up control plane ...
	I0110 02:36:51.836703  168670 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:36:51.836781  168670 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:36:51.842482  168670 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:36:51.859404  168670 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:36:51.859514  168670 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:36:51.868190  168670 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:36:51.868290  168670 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:36:51.868330  168670 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:36:52.033370  168670 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:36:52.033491  168670 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:40:52.034344  168670 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001310119s
	I0110 02:40:52.048168  168670 kubeadm.go:319] 
	I0110 02:40:52.048342  168670 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 02:40:52.048425  168670 kubeadm.go:319] 	- The kubelet is not running
	I0110 02:40:52.048586  168670 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 02:40:52.048637  168670 kubeadm.go:319] 
	I0110 02:40:52.048805  168670 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 02:40:52.048886  168670 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 02:40:52.048956  168670 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 02:40:52.048980  168670 kubeadm.go:319] 
	I0110 02:40:52.050910  168670 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:40:52.051435  168670 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:40:52.051608  168670 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:40:52.051888  168670 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 02:40:52.051903  168670 kubeadm.go:319] 
	I0110 02:40:52.052028  168670 kubeadm.go:403] duration metric: took 8m7.603706251s to StartCluster
	I0110 02:40:52.052063  168670 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0110 02:40:52.052128  168670 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0110 02:40:52.052282  168670 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 02:40:52.092643  168670 cri.go:96] found id: ""
	I0110 02:40:52.092677  168670 logs.go:282] 0 containers: []
	W0110 02:40:52.092686  168670 logs.go:284] No container was found matching "kube-apiserver"
	I0110 02:40:52.092693  168670 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0110 02:40:52.092751  168670 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0110 02:40:52.135162  168670 cri.go:96] found id: ""
	I0110 02:40:52.135182  168670 logs.go:282] 0 containers: []
	W0110 02:40:52.135191  168670 logs.go:284] No container was found matching "etcd"
	I0110 02:40:52.135198  168670 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0110 02:40:52.135261  168670 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0110 02:40:52.166248  168670 cri.go:96] found id: ""
	I0110 02:40:52.166269  168670 logs.go:282] 0 containers: []
	W0110 02:40:52.166277  168670 logs.go:284] No container was found matching "coredns"
	I0110 02:40:52.166284  168670 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0110 02:40:52.166340  168670 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0110 02:40:52.197395  168670 cri.go:96] found id: ""
	I0110 02:40:52.197416  168670 logs.go:282] 0 containers: []
	W0110 02:40:52.197424  168670 logs.go:284] No container was found matching "kube-scheduler"
	I0110 02:40:52.197430  168670 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0110 02:40:52.197486  168670 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0110 02:40:52.226595  168670 cri.go:96] found id: ""
	I0110 02:40:52.226615  168670 logs.go:282] 0 containers: []
	W0110 02:40:52.226624  168670 logs.go:284] No container was found matching "kube-proxy"
	I0110 02:40:52.226630  168670 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0110 02:40:52.226749  168670 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0110 02:40:52.260867  168670 cri.go:96] found id: ""
	I0110 02:40:52.260940  168670 logs.go:282] 0 containers: []
	W0110 02:40:52.260962  168670 logs.go:284] No container was found matching "kube-controller-manager"
	I0110 02:40:52.260985  168670 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0110 02:40:52.261079  168670 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0110 02:40:52.298346  168670 cri.go:96] found id: ""
	I0110 02:40:52.298409  168670 logs.go:282] 0 containers: []
	W0110 02:40:52.298440  168670 logs.go:284] No container was found matching "kindnet"
	I0110 02:40:52.298463  168670 logs.go:123] Gathering logs for kubelet ...
	I0110 02:40:52.298488  168670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0110 02:40:52.372735  168670 logs.go:123] Gathering logs for dmesg ...
	I0110 02:40:52.372815  168670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0110 02:40:52.387731  168670 logs.go:123] Gathering logs for describe nodes ...
	I0110 02:40:52.387938  168670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0110 02:40:52.482423  168670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 02:40:52.467146    4907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:40:52.468391    4907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:40:52.469148    4907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:40:52.475926    4907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:40:52.476243    4907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0110 02:40:52.467146    4907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:40:52.468391    4907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:40:52.469148    4907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:40:52.475926    4907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:40:52.476243    4907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0110 02:40:52.482491  168670 logs.go:123] Gathering logs for CRI-O ...
	I0110 02:40:52.482528  168670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0110 02:40:52.518250  168670 logs.go:123] Gathering logs for container status ...
	I0110 02:40:52.518282  168670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0110 02:40:52.573403  168670 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001310119s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W0110 02:40:52.573484  168670 out.go:285] * 
	* 
	W0110 02:40:52.573533  168670 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001310119s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001310119s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 02:40:52.573550  168670 out.go:285] * 
	* 
	W0110 02:40:52.573805  168670 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 02:40:52.579132  168670 out.go:203] 
	W0110 02:40:52.583084  168670 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001310119s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001310119s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 02:40:52.583140  168670 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0110 02:40:52.583163  168670 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0110 02:40:52.586987  168670 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-088457 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 109
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2026-01-10 02:40:52.652846823 +0000 UTC m=+2836.152830211
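The start failure above reduces to one symptom: kubeadm's wait-control-plane phase gave up because the kubelet never answered http://127.0.0.1:10248/healthz within 4m0s, and no control-plane containers were ever created. A minimal follow-up sketch for inspecting the kubelet inside the still-running node container (assuming the force-systemd-env-088457 container from this run is still present and that curl is available in the kicbase image):

	# check kubelet state and recent logs inside the node container
	docker exec force-systemd-env-088457 systemctl status kubelet --no-pager
	docker exec force-systemd-env-088457 journalctl -xeu kubelet --no-pager -n 100
	# probe the same healthz endpoint kubeadm polls
	docker exec force-systemd-env-088457 curl -sSf http://127.0.0.1:10248/healthz

The suggestion minikube itself prints (passing --extra-config=kubelet.cgroup-driver=systemd to minikube start) targets a cgroup-driver mismatch; the journalctl output above is what would confirm or rule that out.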
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-env-088457
helpers_test.go:244: (dbg) docker inspect force-systemd-env-088457:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9a88fc0a5f69929c7bd85a3deb8ab125564e5327758746f8cca27f48d81defa5",
	        "Created": "2026-01-10T02:32:35.054820467Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 169178,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:32:35.124091119Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/9a88fc0a5f69929c7bd85a3deb8ab125564e5327758746f8cca27f48d81defa5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9a88fc0a5f69929c7bd85a3deb8ab125564e5327758746f8cca27f48d81defa5/hostname",
	        "HostsPath": "/var/lib/docker/containers/9a88fc0a5f69929c7bd85a3deb8ab125564e5327758746f8cca27f48d81defa5/hosts",
	        "LogPath": "/var/lib/docker/containers/9a88fc0a5f69929c7bd85a3deb8ab125564e5327758746f8cca27f48d81defa5/9a88fc0a5f69929c7bd85a3deb8ab125564e5327758746f8cca27f48d81defa5-json.log",
	        "Name": "/force-systemd-env-088457",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-088457:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-088457",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9a88fc0a5f69929c7bd85a3deb8ab125564e5327758746f8cca27f48d81defa5",
	                "LowerDir": "/var/lib/docker/overlay2/2263de8e7aef25816dfe9471d6c95320f4b9d1fd072f8534eac53e6c540623f0-init/diff:/var/lib/docker/overlay2/2b235871be32ebd3e55c0a490bf5b289824c6be94b4d2763f5bca3b814af3fd1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2263de8e7aef25816dfe9471d6c95320f4b9d1fd072f8534eac53e6c540623f0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2263de8e7aef25816dfe9471d6c95320f4b9d1fd072f8534eac53e6c540623f0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2263de8e7aef25816dfe9471d6c95320f4b9d1fd072f8534eac53e6c540623f0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-088457",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-088457/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-088457",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-088457",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-088457",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "df58d2ab63e1d4bc08f86635091a01548b9ef291fb8bd1fab5f5e03cd3205585",
	            "SandboxKey": "/var/run/docker/netns/df58d2ab63e1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33013"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33014"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33017"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33015"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33016"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-088457": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:e7:b6:5c:9c:e6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d16f11dcaaecb8d8e46b67330aaf4776c8021ac2870de29c2273d07c2c8220d4",
	                    "EndpointID": "c880459231e753ad7059e28298ca96d2be0ddf972aa8ad692c336f17fddf7d69",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-088457",
	                        "9a88fc0a5f69"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
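The inspect output above mainly confirms how the node container was created: privileged, cgroupns host, 3 GiB of memory, and the expected SSH/API port mappings bound to 127.0.0.1. Individual fields can be pulled with Docker's Go-template formatter instead of reading the full JSON; a small sketch against the same container, using field names taken from the output above:

	# cgroup namespace mode and privileged flag for the node container
	docker inspect -f '{{.HostConfig.CgroupnsMode}} {{.HostConfig.Privileged}}' force-systemd-env-088457
	# host port backing the node's SSH endpoint (22/tcp), the same query minikube runs itself
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' force-systemd-env-088457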
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-088457 -n force-systemd-env-088457
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-088457 -n force-systemd-env-088457: exit status 6 (366.970996ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0110 02:40:53.078824  192891 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-088457" does not appear in /home/jenkins/minikube-integration/22414-2353/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
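The exit status 6 here is expected for this run: because kubeadm init never completed, the force-systemd-env-088457 profile was never written to the kubeconfig, so `minikube status` reports the host as Running but cannot resolve an API endpoint. Had the cluster started, the stale-context warning above could be cleared as the output itself suggests; a hedged sketch using the test's own binary path:

	# list known contexts, then repair the minikube context entry
	kubectl config get-contexts
	out/minikube-linux-arm64 -p force-systemd-env-088457 update-context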
helpers_test.go:253: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-088457 logs -n 25
helpers_test.go:261: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-989144 sudo cat /etc/kubernetes/kubelet.conf                                                                      │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo cat /var/lib/kubelet/config.yaml                                                                      │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo systemctl status docker --all --full --no-pager                                                       │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo systemctl cat docker --no-pager                                                                       │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo cat /etc/docker/daemon.json                                                                           │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo docker system info                                                                                    │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo systemctl status cri-docker --all --full --no-pager                                                   │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo systemctl cat cri-docker --no-pager                                                                   │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                              │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo cri-dockerd --version                                                                                 │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo systemctl cat containerd --no-pager                                                                   │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo cat /etc/containerd/config.toml                                                                       │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo containerd config dump                                                                                │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo systemctl cat crio --no-pager                                                                         │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo crio config                                                                                           │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ delete  │ -p cilium-989144                                                                                                            │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │ 10 Jan 26 02:36 UTC │
	│ start   │ -p cert-expiration-213257 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-213257    │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │ 10 Jan 26 02:37 UTC │
	│ start   │ -p cert-expiration-213257 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                   │ cert-expiration-213257    │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:40 UTC │
	│ delete  │ -p cert-expiration-213257                                                                                                   │ cert-expiration-213257    │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:40 UTC │
	│ start   │ -p force-systemd-flag-038359 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-038359 │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:40:34
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:40:34.944502  190834 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:40:34.944636  190834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:40:34.944646  190834 out.go:374] Setting ErrFile to fd 2...
	I0110 02:40:34.944652  190834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:40:34.944905  190834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:40:34.945319  190834 out.go:368] Setting JSON to false
	I0110 02:40:34.946128  190834 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4984,"bootTime":1768007851,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 02:40:34.946196  190834 start.go:143] virtualization:  
	I0110 02:40:34.949943  190834 out.go:179] * [force-systemd-flag-038359] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:40:34.954405  190834 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:40:34.954550  190834 notify.go:221] Checking for updates...
	I0110 02:40:34.962073  190834 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:40:34.965352  190834 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:40:34.968502  190834 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 02:40:34.971655  190834 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:40:34.975049  190834 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:40:34.978712  190834 config.go:182] Loaded profile config "force-systemd-env-088457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:40:34.978864  190834 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:40:35.005244  190834 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:40:35.005379  190834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:40:35.067761  190834 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:40:35.05878014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:40:35.067930  190834 docker.go:319] overlay module found
	I0110 02:40:35.072665  190834 out.go:179] * Using the docker driver based on user configuration
	I0110 02:40:35.075618  190834 start.go:309] selected driver: docker
	I0110 02:40:35.075634  190834 start.go:928] validating driver "docker" against <nil>
	I0110 02:40:35.075648  190834 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:40:35.076468  190834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:40:35.135115  190834 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:40:35.125911054 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:40:35.135274  190834 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 02:40:35.135531  190834 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 02:40:35.138618  190834 out.go:179] * Using Docker driver with root privileges
	I0110 02:40:35.141654  190834 cni.go:84] Creating CNI manager for ""
	I0110 02:40:35.141718  190834 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:40:35.141732  190834 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 02:40:35.141821  190834 start.go:353] cluster config:
	{Name:force-systemd-flag-038359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-038359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:40:35.144904  190834 out.go:179] * Starting "force-systemd-flag-038359" primary control-plane node in "force-systemd-flag-038359" cluster
	I0110 02:40:35.147789  190834 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:40:35.150933  190834 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:40:35.153896  190834 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:40:35.153864  190834 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:40:35.153995  190834 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 02:40:35.154005  190834 cache.go:65] Caching tarball of preloaded images
	I0110 02:40:35.154091  190834 preload.go:251] Found /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 02:40:35.154101  190834 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:40:35.154216  190834 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/config.json ...
	I0110 02:40:35.154235  190834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/config.json: {Name:mkd7f432d87646b77f41ac9d01b0d3f1947185db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:40:35.202872  190834 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:40:35.202900  190834 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:40:35.202915  190834 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:40:35.202946  190834 start.go:360] acquireMachinesLock for force-systemd-flag-038359: {Name:mk2df15322c6a2e3c70c612564bce9d9870c5bba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:40:35.203079  190834 start.go:364] duration metric: took 118.109µs to acquireMachinesLock for "force-systemd-flag-038359"
	I0110 02:40:35.203107  190834 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-038359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-038359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:40:35.203183  190834 start.go:125] createHost starting for "" (driver="docker")
	I0110 02:40:35.206679  190834 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:40:35.206980  190834 start.go:159] libmachine.API.Create for "force-systemd-flag-038359" (driver="docker")
	I0110 02:40:35.207032  190834 client.go:173] LocalClient.Create starting
	I0110 02:40:35.207176  190834 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem
	I0110 02:40:35.207273  190834 main.go:144] libmachine: Decoding PEM data...
	I0110 02:40:35.207310  190834 main.go:144] libmachine: Parsing certificate...
	I0110 02:40:35.207441  190834 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem
	I0110 02:40:35.207506  190834 main.go:144] libmachine: Decoding PEM data...
	I0110 02:40:35.207544  190834 main.go:144] libmachine: Parsing certificate...
	I0110 02:40:35.208108  190834 cli_runner.go:164] Run: docker network inspect force-systemd-flag-038359 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:40:35.235486  190834 cli_runner.go:211] docker network inspect force-systemd-flag-038359 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:40:35.235584  190834 network_create.go:284] running [docker network inspect force-systemd-flag-038359] to gather additional debugging logs...
	I0110 02:40:35.235606  190834 cli_runner.go:164] Run: docker network inspect force-systemd-flag-038359
	W0110 02:40:35.260377  190834 cli_runner.go:211] docker network inspect force-systemd-flag-038359 returned with exit code 1
	I0110 02:40:35.260413  190834 network_create.go:287] error running [docker network inspect force-systemd-flag-038359]: docker network inspect force-systemd-flag-038359: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-038359 not found
	I0110 02:40:35.260428  190834 network_create.go:289] output of [docker network inspect force-systemd-flag-038359]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-038359 not found
	
	** /stderr **
	I0110 02:40:35.260516  190834 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:40:35.276949  190834 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dba69832168e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:28:cf:b8:5c:6c} reservation:<nil>}
	I0110 02:40:35.277231  190834 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1bcca9209747 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d2:8b:83:a4:43:ae} reservation:<nil>}
	I0110 02:40:35.277534  190834 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-383ddfacc8f8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:26:ff:fb:97:66:af} reservation:<nil>}
	I0110 02:40:35.277830  190834 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d16f11dcaaec IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:86:ad:01:73:d1:58} reservation:<nil>}
	I0110 02:40:35.278234  190834 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a40ee0}
	I0110 02:40:35.278262  190834 network_create.go:124] attempt to create docker network force-systemd-flag-038359 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0110 02:40:35.278320  190834 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-038359 force-systemd-flag-038359
	I0110 02:40:35.340331  190834 network_create.go:108] docker network force-systemd-flag-038359 192.168.85.0/24 created
	I0110 02:40:35.340363  190834 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-038359" container
	I0110 02:40:35.340453  190834 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:40:35.354829  190834 cli_runner.go:164] Run: docker volume create force-systemd-flag-038359 --label name.minikube.sigs.k8s.io=force-systemd-flag-038359 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:40:35.371976  190834 oci.go:103] Successfully created a docker volume force-systemd-flag-038359
	I0110 02:40:35.372067  190834 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-038359-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-038359 --entrypoint /usr/bin/test -v force-systemd-flag-038359:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 02:40:35.906617  190834 oci.go:107] Successfully prepared a docker volume force-systemd-flag-038359
	I0110 02:40:35.906668  190834 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:40:35.906678  190834 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 02:40:35.906761  190834 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-038359:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 02:40:39.821199  190834 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-038359:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.914403019s)
	I0110 02:40:39.821230  190834 kic.go:203] duration metric: took 3.914548239s to extract preloaded images to volume ...
	W0110 02:40:39.821363  190834 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 02:40:39.821472  190834 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 02:40:39.875879  190834 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-038359 --name force-systemd-flag-038359 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-038359 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-038359 --network force-systemd-flag-038359 --ip 192.168.85.2 --volume force-systemd-flag-038359:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 02:40:40.238773  190834 cli_runner.go:164] Run: docker container inspect force-systemd-flag-038359 --format={{.State.Running}}
	I0110 02:40:40.262549  190834 cli_runner.go:164] Run: docker container inspect force-systemd-flag-038359 --format={{.State.Status}}
	I0110 02:40:40.285430  190834 cli_runner.go:164] Run: docker exec force-systemd-flag-038359 stat /var/lib/dpkg/alternatives/iptables
	I0110 02:40:40.334155  190834 oci.go:144] the created container "force-systemd-flag-038359" has a running status.
	I0110 02:40:40.334424  190834 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/force-systemd-flag-038359/id_rsa...
	I0110 02:40:40.949209  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/force-systemd-flag-038359/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0110 02:40:40.949263  190834 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22414-2353/.minikube/machines/force-systemd-flag-038359/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 02:40:40.968573  190834 cli_runner.go:164] Run: docker container inspect force-systemd-flag-038359 --format={{.State.Status}}
	I0110 02:40:40.985291  190834 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 02:40:40.985316  190834 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-038359 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 02:40:41.026402  190834 cli_runner.go:164] Run: docker container inspect force-systemd-flag-038359 --format={{.State.Status}}
	I0110 02:40:41.042806  190834 machine.go:94] provisionDockerMachine start ...
	I0110 02:40:41.042892  190834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-038359
	I0110 02:40:41.060224  190834 main.go:144] libmachine: Using SSH client type: native
	I0110 02:40:41.060568  190834 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I0110 02:40:41.060585  190834 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:40:41.061206  190834 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55550->127.0.0.1:33038: read: connection reset by peer
	I0110 02:40:44.211650  190834 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-038359
	
	I0110 02:40:44.211672  190834 ubuntu.go:182] provisioning hostname "force-systemd-flag-038359"
	I0110 02:40:44.211736  190834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-038359
	I0110 02:40:44.234462  190834 main.go:144] libmachine: Using SSH client type: native
	I0110 02:40:44.234776  190834 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I0110 02:40:44.234787  190834 main.go:144] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-038359 && echo "force-systemd-flag-038359" | sudo tee /etc/hostname
	I0110 02:40:44.392717  190834 main.go:144] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-038359
	
	I0110 02:40:44.392834  190834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-038359
	I0110 02:40:44.410530  190834 main.go:144] libmachine: Using SSH client type: native
	I0110 02:40:44.410841  190834 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I0110 02:40:44.410864  190834 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-038359' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-038359/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-038359' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:40:44.556086  190834 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:40:44.556111  190834 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2353/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2353/.minikube}
	I0110 02:40:44.556140  190834 ubuntu.go:190] setting up certificates
	I0110 02:40:44.556151  190834 provision.go:84] configureAuth start
	I0110 02:40:44.556226  190834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-038359
	I0110 02:40:44.573941  190834 provision.go:143] copyHostCerts
	I0110 02:40:44.573990  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem
	I0110 02:40:44.574024  190834 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem, removing ...
	I0110 02:40:44.574037  190834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem
	I0110 02:40:44.574119  190834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem (1082 bytes)
	I0110 02:40:44.574238  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem
	I0110 02:40:44.574261  190834 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem, removing ...
	I0110 02:40:44.574269  190834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem
	I0110 02:40:44.574298  190834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem (1123 bytes)
	I0110 02:40:44.574350  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem
	I0110 02:40:44.574371  190834 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem, removing ...
	I0110 02:40:44.574375  190834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem
	I0110 02:40:44.574400  190834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem (1679 bytes)
	I0110 02:40:44.574462  190834 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-038359 san=[127.0.0.1 192.168.85.2 force-systemd-flag-038359 localhost minikube]
	I0110 02:40:44.814297  190834 provision.go:177] copyRemoteCerts
	I0110 02:40:44.814369  190834 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:40:44.814411  190834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-038359
	I0110 02:40:44.831134  190834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/force-systemd-flag-038359/id_rsa Username:docker}
	I0110 02:40:44.936929  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0110 02:40:44.937056  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:40:44.961717  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0110 02:40:44.961843  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0110 02:40:44.983543  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0110 02:40:44.983605  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:40:45.001940  190834 provision.go:87] duration metric: took 445.770853ms to configureAuth
	I0110 02:40:45.001967  190834 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:40:45.002190  190834 config.go:182] Loaded profile config "force-systemd-flag-038359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:40:45.002297  190834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-038359
	I0110 02:40:45.042837  190834 main.go:144] libmachine: Using SSH client type: native
	I0110 02:40:45.043160  190834 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33038 <nil> <nil>}
	I0110 02:40:45.043174  190834 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:40:45.403873  190834 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:40:45.403965  190834 machine.go:97] duration metric: took 4.361126495s to provisionDockerMachine
	I0110 02:40:45.403977  190834 client.go:176] duration metric: took 10.196894815s to LocalClient.Create
	I0110 02:40:45.403988  190834 start.go:167] duration metric: took 10.197009117s to libmachine.API.Create "force-systemd-flag-038359"
	I0110 02:40:45.403997  190834 start.go:293] postStartSetup for "force-systemd-flag-038359" (driver="docker")
	I0110 02:40:45.404021  190834 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:40:45.404280  190834 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:40:45.404488  190834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-038359
	I0110 02:40:45.423546  190834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/force-systemd-flag-038359/id_rsa Username:docker}
	I0110 02:40:45.532084  190834 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:40:45.535722  190834 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:40:45.535751  190834 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:40:45.535763  190834 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/addons for local assets ...
	I0110 02:40:45.535841  190834 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/files for local assets ...
	I0110 02:40:45.535932  190834 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> 41682.pem in /etc/ssl/certs
	I0110 02:40:45.535943  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> /etc/ssl/certs/41682.pem
	I0110 02:40:45.536042  190834 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:40:45.543403  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:40:45.561065  190834 start.go:296] duration metric: took 157.040409ms for postStartSetup
	I0110 02:40:45.561424  190834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-038359
	I0110 02:40:45.577914  190834 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/config.json ...
	I0110 02:40:45.578198  190834 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:40:45.578238  190834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-038359
	I0110 02:40:45.595012  190834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/force-systemd-flag-038359/id_rsa Username:docker}
	I0110 02:40:45.693191  190834 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:40:45.699578  190834 start.go:128] duration metric: took 10.496380194s to createHost
	I0110 02:40:45.699599  190834 start.go:83] releasing machines lock for "force-systemd-flag-038359", held for 10.496511685s
	I0110 02:40:45.699672  190834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-038359
	I0110 02:40:45.717700  190834 ssh_runner.go:195] Run: cat /version.json
	I0110 02:40:45.717750  190834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-038359
	I0110 02:40:45.717774  190834 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:40:45.717835  190834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-038359
	I0110 02:40:45.744589  190834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/force-systemd-flag-038359/id_rsa Username:docker}
	I0110 02:40:45.761323  190834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33038 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/force-systemd-flag-038359/id_rsa Username:docker}
	I0110 02:40:45.847376  190834 ssh_runner.go:195] Run: systemctl --version
	I0110 02:40:45.952627  190834 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:40:45.997307  190834 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:40:46.001599  190834 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:40:46.001671  190834 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:40:46.031377  190834 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 02:40:46.031452  190834 start.go:496] detecting cgroup driver to use...
	I0110 02:40:46.031480  190834 start.go:500] using "systemd" cgroup driver as enforced via flags
	I0110 02:40:46.031575  190834 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:40:46.048742  190834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:40:46.062097  190834 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:40:46.062163  190834 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:40:46.080253  190834 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:40:46.099779  190834 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:40:46.213218  190834 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:40:46.332605  190834 docker.go:234] disabling docker service ...
	I0110 02:40:46.332678  190834 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:40:46.353254  190834 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:40:46.366472  190834 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:40:46.502538  190834 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:40:46.619599  190834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:40:46.632499  190834 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:40:46.645800  190834 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:40:46.645881  190834 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:40:46.654423  190834 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 02:40:46.654491  190834 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:40:46.664375  190834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:40:46.673006  190834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:40:46.681841  190834 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:40:46.689784  190834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:40:46.698245  190834 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:40:46.711547  190834 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:40:46.720303  190834 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:40:46.727637  190834 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:40:46.734885  190834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:40:46.845207  190834 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 02:40:47.002772  190834 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:40:47.002854  190834 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:40:47.006743  190834 start.go:574] Will wait 60s for crictl version
	I0110 02:40:47.006871  190834 ssh_runner.go:195] Run: which crictl
	I0110 02:40:47.011036  190834 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:40:47.034983  190834 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:40:47.035063  190834 ssh_runner.go:195] Run: crio --version
	I0110 02:40:47.062515  190834 ssh_runner.go:195] Run: crio --version
	I0110 02:40:47.095680  190834 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:40:47.098559  190834 cli_runner.go:164] Run: docker network inspect force-systemd-flag-038359 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:40:47.114186  190834 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 02:40:47.117755  190834 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:40:47.127040  190834 kubeadm.go:884] updating cluster {Name:force-systemd-flag-038359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-038359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:40:47.127150  190834 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:40:47.127212  190834 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:40:47.163981  190834 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:40:47.164005  190834 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:40:47.164059  190834 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:40:47.197690  190834 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:40:47.197710  190834 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:40:47.197717  190834 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.35.0 crio true true} ...
	I0110 02:40:47.197821  190834 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-038359 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-038359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:40:47.197901  190834 ssh_runner.go:195] Run: crio config
	I0110 02:40:47.271211  190834 cni.go:84] Creating CNI manager for ""
	I0110 02:40:47.271234  190834 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:40:47.271250  190834 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:40:47.271271  190834 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-038359 NodeName:force-systemd-flag-038359 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:40:47.271414  190834 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-038359"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:40:47.271495  190834 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:40:47.279037  190834 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:40:47.279140  190834 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:40:47.286432  190834 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0110 02:40:47.299136  190834 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:40:47.311712  190834 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I0110 02:40:47.324725  190834 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:40:47.328309  190834 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:40:47.337593  190834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:40:47.453947  190834 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:40:47.470677  190834 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359 for IP: 192.168.85.2
	I0110 02:40:47.470717  190834 certs.go:195] generating shared ca certs ...
	I0110 02:40:47.470734  190834 certs.go:227] acquiring lock for ca certs: {Name:mk60bb8578732e3e8efcf11c7d77fb48585828c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:40:47.470890  190834 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key
	I0110 02:40:47.470950  190834 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key
	I0110 02:40:47.470963  190834 certs.go:257] generating profile certs ...
	I0110 02:40:47.471029  190834 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/client.key
	I0110 02:40:47.471046  190834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/client.crt with IP's: []
	I0110 02:40:47.534793  190834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/client.crt ...
	I0110 02:40:47.534824  190834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/client.crt: {Name:mk9068837c6c8383975dad8341ce74c1b3c1e57c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:40:47.535017  190834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/client.key ...
	I0110 02:40:47.535032  190834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/client.key: {Name:mk783a300ec5c23d62425fe2d5bfd807023e0b6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:40:47.535126  190834 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.key.e7e177af
	I0110 02:40:47.535144  190834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.crt.e7e177af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0110 02:40:48.076389  190834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.crt.e7e177af ...
	I0110 02:40:48.076424  190834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.crt.e7e177af: {Name:mkeb6b0dd9ca6d2b1956a0b711fe2ee9db8bcbe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:40:48.076632  190834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.key.e7e177af ...
	I0110 02:40:48.076649  190834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.key.e7e177af: {Name:mk1a7b696dc969079486c70f177199e2a27ee94e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:40:48.076740  190834 certs.go:382] copying /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.crt.e7e177af -> /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.crt
	I0110 02:40:48.076822  190834 certs.go:386] copying /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.key.e7e177af -> /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.key
	I0110 02:40:48.076881  190834 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/proxy-client.key
	I0110 02:40:48.076899  190834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/proxy-client.crt with IP's: []
	I0110 02:40:48.446506  190834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/proxy-client.crt ...
	I0110 02:40:48.446539  190834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/proxy-client.crt: {Name:mk716e4378e7435584b6b60a78214a44f7210922 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:40:48.446724  190834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/proxy-client.key ...
	I0110 02:40:48.446739  190834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/proxy-client.key: {Name:mkd1f41df2eb7d273f6066f14f3710f365fbea1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:40:48.446820  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0110 02:40:48.446843  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0110 02:40:48.446860  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0110 02:40:48.446881  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0110 02:40:48.446902  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0110 02:40:48.446914  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0110 02:40:48.446929  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0110 02:40:48.446939  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0110 02:40:48.446989  190834 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem (1338 bytes)
	W0110 02:40:48.447035  190834 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168_empty.pem, impossibly tiny 0 bytes
	I0110 02:40:48.447048  190834 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:40:48.447076  190834 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:40:48.447103  190834 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:40:48.447132  190834 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem (1679 bytes)
	I0110 02:40:48.447177  190834 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:40:48.447211  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:40:48.447226  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem -> /usr/share/ca-certificates/4168.pem
	I0110 02:40:48.447239  190834 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> /usr/share/ca-certificates/41682.pem
	I0110 02:40:48.447864  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:40:48.467051  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:40:48.484763  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:40:48.503563  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 02:40:48.520254  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0110 02:40:48.537881  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 02:40:48.555233  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:40:48.572166  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/force-systemd-flag-038359/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 02:40:48.589572  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:40:48.606589  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I0110 02:40:48.623146  190834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I0110 02:40:48.640116  190834 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:40:48.652506  190834 ssh_runner.go:195] Run: openssl version
	I0110 02:40:48.658604  190834 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:40:48.666481  190834 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:40:48.675186  190834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:40:48.679294  190834 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:40:48.679357  190834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:40:48.721246  190834 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:40:48.729550  190834 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 02:40:48.737253  190834 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I0110 02:40:48.744828  190834 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I0110 02:40:48.751643  190834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I0110 02:40:48.755102  190834 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:58 /usr/share/ca-certificates/4168.pem
	I0110 02:40:48.755203  190834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I0110 02:40:48.795628  190834 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:40:48.802737  190834 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4168.pem /etc/ssl/certs/51391683.0
	I0110 02:40:48.809639  190834 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I0110 02:40:48.816571  190834 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I0110 02:40:48.823951  190834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I0110 02:40:48.827510  190834 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:58 /usr/share/ca-certificates/41682.pem
	I0110 02:40:48.827569  190834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I0110 02:40:48.869863  190834 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:40:48.877210  190834 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41682.pem /etc/ssl/certs/3ec20f2e.0
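The openssl/ln sequence above is minikube installing each CA into the node's trust store: hash the PEM, then symlink it under /etc/ssl/certs both by name and by hash. A sketch of the same steps run manually inside the node, using the minikubeCA path from this log (the hash value differs per certificate):

	# Sketch: replicate the hash-and-symlink step for one CA inside the node.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"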
	I0110 02:40:48.884193  190834 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:40:48.887853  190834 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 02:40:48.887906  190834 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-038359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:force-systemd-flag-038359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:40:48.887977  190834 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:40:48.888044  190834 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:40:48.913860  190834 cri.go:96] found id: ""
	I0110 02:40:48.913981  190834 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:40:48.923939  190834 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 02:40:48.932369  190834 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:40:48.932466  190834 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:40:48.941319  190834 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:40:48.941339  190834 kubeadm.go:158] found existing configuration files:
	
	I0110 02:40:48.941416  190834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:40:48.949286  190834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:40:48.949375  190834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:40:48.957079  190834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:40:48.964885  190834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:40:48.964979  190834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:40:48.972629  190834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:40:48.981262  190834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:40:48.981359  190834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:40:48.988428  190834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:40:48.995921  190834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:40:48.995996  190834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:40:49.003106  190834 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:40:49.046349  190834 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:40:49.046622  190834 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:40:49.114730  190834 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:40:49.114907  190834 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:40:49.114977  190834 kubeadm.go:319] OS: Linux
	I0110 02:40:49.115060  190834 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:40:49.115141  190834 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:40:49.115221  190834 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:40:49.115301  190834 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:40:49.115381  190834 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:40:49.115462  190834 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:40:49.115537  190834 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:40:49.115617  190834 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:40:49.115697  190834 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:40:49.177733  190834 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:40:49.177909  190834 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:40:49.178056  190834 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:40:49.188195  190834 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 02:40:49.194618  190834 out.go:252]   - Generating certificates and keys ...
	I0110 02:40:49.194770  190834 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:40:49.194874  190834 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:40:49.875900  190834 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 02:40:49.923218  190834 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 02:40:52.034344  168670 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001310119s
	I0110 02:40:52.048168  168670 kubeadm.go:319] 
	I0110 02:40:52.048342  168670 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 02:40:52.048425  168670 kubeadm.go:319] 	- The kubelet is not running
	I0110 02:40:52.048586  168670 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 02:40:52.048637  168670 kubeadm.go:319] 
	I0110 02:40:52.048805  168670 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 02:40:52.048886  168670 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 02:40:52.048956  168670 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 02:40:52.048980  168670 kubeadm.go:319] 
	I0110 02:40:52.050910  168670 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:40:52.051435  168670 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:40:52.051608  168670 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:40:52.051888  168670 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 02:40:52.051903  168670 kubeadm.go:319] 
	I0110 02:40:52.052028  168670 kubeadm.go:403] duration metric: took 8m7.603706251s to StartCluster
	I0110 02:40:52.052063  168670 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0110 02:40:52.052128  168670 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0110 02:40:52.052282  168670 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 02:40:52.092643  168670 cri.go:96] found id: ""
	I0110 02:40:52.092677  168670 logs.go:282] 0 containers: []
	W0110 02:40:52.092686  168670 logs.go:284] No container was found matching "kube-apiserver"
	I0110 02:40:52.092693  168670 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0110 02:40:52.092751  168670 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0110 02:40:52.135162  168670 cri.go:96] found id: ""
	I0110 02:40:52.135182  168670 logs.go:282] 0 containers: []
	W0110 02:40:52.135191  168670 logs.go:284] No container was found matching "etcd"
	I0110 02:40:52.135198  168670 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0110 02:40:52.135261  168670 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0110 02:40:52.166248  168670 cri.go:96] found id: ""
	I0110 02:40:52.166269  168670 logs.go:282] 0 containers: []
	W0110 02:40:52.166277  168670 logs.go:284] No container was found matching "coredns"
	I0110 02:40:52.166284  168670 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0110 02:40:52.166340  168670 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0110 02:40:52.197395  168670 cri.go:96] found id: ""
	I0110 02:40:52.197416  168670 logs.go:282] 0 containers: []
	W0110 02:40:52.197424  168670 logs.go:284] No container was found matching "kube-scheduler"
	I0110 02:40:52.197430  168670 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0110 02:40:52.197486  168670 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0110 02:40:52.226595  168670 cri.go:96] found id: ""
	I0110 02:40:52.226615  168670 logs.go:282] 0 containers: []
	W0110 02:40:52.226624  168670 logs.go:284] No container was found matching "kube-proxy"
	I0110 02:40:52.226630  168670 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0110 02:40:52.226749  168670 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0110 02:40:52.260867  168670 cri.go:96] found id: ""
	I0110 02:40:52.260940  168670 logs.go:282] 0 containers: []
	W0110 02:40:52.260962  168670 logs.go:284] No container was found matching "kube-controller-manager"
	I0110 02:40:52.260985  168670 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0110 02:40:52.261079  168670 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0110 02:40:52.298346  168670 cri.go:96] found id: ""
	I0110 02:40:52.298409  168670 logs.go:282] 0 containers: []
	W0110 02:40:52.298440  168670 logs.go:284] No container was found matching "kindnet"
	I0110 02:40:52.298463  168670 logs.go:123] Gathering logs for kubelet ...
	I0110 02:40:52.298488  168670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0110 02:40:52.372735  168670 logs.go:123] Gathering logs for dmesg ...
	I0110 02:40:52.372815  168670 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0110 02:40:52.387731  168670 logs.go:123] Gathering logs for describe nodes ...
	I0110 02:40:52.387938  168670 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0110 02:40:52.482423  168670 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 02:40:52.467146    4907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:40:52.468391    4907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:40:52.469148    4907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:40:52.475926    4907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:40:52.476243    4907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0110 02:40:52.467146    4907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:40:52.468391    4907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:40:52.469148    4907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:40:52.475926    4907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:40:52.476243    4907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0110 02:40:52.482491  168670 logs.go:123] Gathering logs for CRI-O ...
	I0110 02:40:52.482528  168670 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0110 02:40:52.518250  168670 logs.go:123] Gathering logs for container status ...
	I0110 02:40:52.518282  168670 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0110 02:40:52.573403  168670 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001310119s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W0110 02:40:52.573484  168670 out.go:285] * 
	W0110 02:40:52.573533  168670 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001310119s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 02:40:52.573550  168670 out.go:285] * 
	W0110 02:40:52.573805  168670 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 02:40:52.579132  168670 out.go:203] 
	W0110 02:40:52.583084  168670 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001310119s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 02:40:52.583140  168670 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0110 02:40:52.583163  168670 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0110 02:40:52.586987  168670 out.go:203] 
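The exit path above ends with the K8S_KUBELET_NOT_RUNNING suggestion. A sketch of that follow-up, assuming the failing profile is force-systemd-env-088457 (the hostname in the journal sections below); the --extra-config value is the one quoted in the suggestion line:

	# Sketch: run the troubleshooting commands the error message recommends.
	minikube ssh -p force-systemd-env-088457 -- sudo systemctl status kubelet
	minikube ssh -p force-systemd-env-088457 -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
	# Retry the start with the cgroup driver hint from the suggestion:
	minikube start -p force-systemd-env-088457 --extra-config=kubelet.cgroup-driver=systemd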
	
	
	==> CRI-O <==
	Jan 10 02:32:42 force-systemd-env-088457 crio[832]: time="2026-01-10T02:32:42.516962667Z" level=info msg="Registered SIGHUP reload watcher"
	Jan 10 02:32:42 force-systemd-env-088457 crio[832]: time="2026-01-10T02:32:42.516992278Z" level=info msg="Starting seccomp notifier watcher"
	Jan 10 02:32:42 force-systemd-env-088457 crio[832]: time="2026-01-10T02:32:42.517042328Z" level=info msg="Create NRI interface"
	Jan 10 02:32:42 force-systemd-env-088457 crio[832]: time="2026-01-10T02:32:42.517138169Z" level=info msg="built-in NRI default validator is disabled"
	Jan 10 02:32:42 force-systemd-env-088457 crio[832]: time="2026-01-10T02:32:42.517146874Z" level=info msg="runtime interface created"
	Jan 10 02:32:42 force-systemd-env-088457 crio[832]: time="2026-01-10T02:32:42.517157893Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Jan 10 02:32:42 force-systemd-env-088457 crio[832]: time="2026-01-10T02:32:42.517163571Z" level=info msg="runtime interface starting up..."
	Jan 10 02:32:42 force-systemd-env-088457 crio[832]: time="2026-01-10T02:32:42.517169552Z" level=info msg="starting plugins..."
	Jan 10 02:32:42 force-systemd-env-088457 crio[832]: time="2026-01-10T02:32:42.51718154Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Jan 10 02:32:42 force-systemd-env-088457 crio[832]: time="2026-01-10T02:32:42.517249246Z" level=info msg="No systemd watchdog enabled"
	Jan 10 02:32:42 force-systemd-env-088457 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Jan 10 02:32:44 force-systemd-env-088457 crio[832]: time="2026-01-10T02:32:44.752651249Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=599f75c9-0183-4f83-861b-a72a106b5cb9 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:32:44 force-systemd-env-088457 crio[832]: time="2026-01-10T02:32:44.753411068Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=5b807be0-b197-4a84-b45f-63c89747e2f5 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:32:44 force-systemd-env-088457 crio[832]: time="2026-01-10T02:32:44.753948699Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=4a6eb4ef-af12-44c4-99b1-9bddf8c5bece name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:32:44 force-systemd-env-088457 crio[832]: time="2026-01-10T02:32:44.75440205Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=8a4bbacb-8a58-4a1f-a62c-cb9d9c0fd126 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:32:44 force-systemd-env-088457 crio[832]: time="2026-01-10T02:32:44.754854318Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=95ad1699-750c-4fcb-b85c-f3032e5d6b42 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:32:44 force-systemd-env-088457 crio[832]: time="2026-01-10T02:32:44.755333875Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=0afcdd80-869b-494d-9286-6a1a3fc4fe27 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:32:44 force-systemd-env-088457 crio[832]: time="2026-01-10T02:32:44.755815097Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=fbdddbe0-cdc8-417a-9213-bb72755a323b name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:36:49 force-systemd-env-088457 crio[832]: time="2026-01-10T02:36:49.686262357Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0" id=e5aadca7-6e2b-498f-9628-d7eaf1a53233 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:36:49 force-systemd-env-088457 crio[832]: time="2026-01-10T02:36:49.687339665Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0" id=86622968-ed78-406b-a3bd-b0ab67b52b25 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:36:49 force-systemd-env-088457 crio[832]: time="2026-01-10T02:36:49.68819952Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0" id=0309bc12-e003-4c67-ac46-470e16aec8b4 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:36:49 force-systemd-env-088457 crio[832]: time="2026-01-10T02:36:49.688612874Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=5b72b726-9397-4a95-b63c-f28c2c067f11 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:36:49 force-systemd-env-088457 crio[832]: time="2026-01-10T02:36:49.68903645Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=d7f7d278-38cb-4625-afc7-f4d28ed3ca2e name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:36:49 force-systemd-env-088457 crio[832]: time="2026-01-10T02:36:49.689542568Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=bde90d71-8915-44bb-9277-18f0639d6d94 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:36:49 force-systemd-env-088457 crio[832]: time="2026-01-10T02:36:49.690076566Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.6-0" id=3d70d9be-f45d-4ad0-9f6a-83037303f034 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 02:40:53.852241    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:40:53.852797    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:40:53.854490    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:40:53.855072    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:40:53.856720    5026 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jan10 02:07] overlayfs: idmapped layers are currently not supported
	[Jan10 02:08] overlayfs: idmapped layers are currently not supported
	[  +3.770589] overlayfs: idmapped layers are currently not supported
	[ +39.223525] overlayfs: idmapped layers are currently not supported
	[Jan10 02:09] overlayfs: idmapped layers are currently not supported
	[Jan10 02:10] overlayfs: idmapped layers are currently not supported
	[Jan10 02:14] overlayfs: idmapped layers are currently not supported
	[ +27.765975] overlayfs: idmapped layers are currently not supported
	[Jan10 02:15] overlayfs: idmapped layers are currently not supported
	[Jan10 02:16] overlayfs: idmapped layers are currently not supported
	[Jan10 02:17] overlayfs: idmapped layers are currently not supported
	[Jan10 02:18] overlayfs: idmapped layers are currently not supported
	[ +23.212409] overlayfs: idmapped layers are currently not supported
	[  +9.866169] overlayfs: idmapped layers are currently not supported
	[Jan10 02:19] overlayfs: idmapped layers are currently not supported
	[Jan10 02:20] overlayfs: idmapped layers are currently not supported
	[ +35.960240] overlayfs: idmapped layers are currently not supported
	[Jan10 02:21] overlayfs: idmapped layers are currently not supported
	[Jan10 02:23] overlayfs: idmapped layers are currently not supported
	[Jan10 02:24] overlayfs: idmapped layers are currently not supported
	[Jan10 02:25] overlayfs: idmapped layers are currently not supported
	[Jan10 02:30] overlayfs: idmapped layers are currently not supported
	[Jan10 02:31] overlayfs: idmapped layers are currently not supported
	[Jan10 02:35] overlayfs: idmapped layers are currently not supported
	[Jan10 02:37] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 02:40:53 up  1:23,  0 user,  load average: 0.93, 1.20, 1.65
	Linux force-systemd-env-088457 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Jan 10 02:40:51 force-systemd-env-088457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:40:51 force-systemd-env-088457 kubelet[4842]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 10 02:40:51 force-systemd-env-088457 kubelet[4842]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 10 02:40:51 force-systemd-env-088457 kubelet[4842]: E0110 02:40:51.725852    4842 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 02:40:51 force-systemd-env-088457 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 02:40:51 force-systemd-env-088457 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 02:40:52 force-systemd-env-088457 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 647.
	Jan 10 02:40:52 force-systemd-env-088457 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:40:52 force-systemd-env-088457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:40:52 force-systemd-env-088457 kubelet[4912]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 10 02:40:52 force-systemd-env-088457 kubelet[4912]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 10 02:40:52 force-systemd-env-088457 kubelet[4912]: E0110 02:40:52.543231    4912 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 02:40:52 force-systemd-env-088457 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 02:40:52 force-systemd-env-088457 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 02:40:53 force-systemd-env-088457 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 648.
	Jan 10 02:40:53 force-systemd-env-088457 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:40:53 force-systemd-env-088457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:40:53 force-systemd-env-088457 kubelet[4946]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 10 02:40:53 force-systemd-env-088457 kubelet[4946]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Jan 10 02:40:53 force-systemd-env-088457 kubelet[4946]: E0110 02:40:53.250588    4946 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Jan 10 02:40:53 force-systemd-env-088457 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Jan 10 02:40:53 force-systemd-env-088457 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 10 02:40:53 force-systemd-env-088457 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 649.
	Jan 10 02:40:53 force-systemd-env-088457 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 02:40:53 force-systemd-env-088457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-088457 -n force-systemd-env-088457
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-088457 -n force-systemd-env-088457: exit status 6 (442.598639ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0110 02:40:54.424505  193114 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-088457" does not appear in /home/jenkins/minikube-integration/22414-2353/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "force-systemd-env-088457" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "force-systemd-env-088457" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-088457
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-088457: (2.011206845s)
--- FAIL: TestForceSystemdEnv (506.58s)
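The kubelet journal above shows why this profile never came up: kubelet v1.35 refuses to start on a node that is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so systemd keeps restarting it (restart counter 647-649) and the apiserver stays down. As a hedged diagnostic aside, not part of the test harness, one way to confirm which cgroup mode the node container sees is to check the filesystem type mounted at /sys/fs/cgroup; the container name below is taken from this run and assumes the profile has not yet been deleted:

	# diagnostic sketch only; "cgroup2fs" means unified cgroup v2, "tmpfs" means the legacy v1 hierarchy
	docker exec force-systemd-env-088457 stat -fc %T /sys/fs/cgroup

Since the kicbase container shares the host's cgroup hierarchy, a v1 result points at the Jenkins host kernel setup rather than at anything inside the profile.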

                                                
                                    
TestJSONOutput/pause/Command (1.77s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-428708 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-428708 --output=json --user=testUser: exit status 80 (1.763442358s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4a986250-dc20-4fa9-a9be-7b655fd57569","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-428708 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"550317ce-0243-4a78-848b-a2db809d925b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2026-01-10T02:11:13Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"6f6db9b2-0b8d-48eb-b40f-39c0e19c2e86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-428708 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.77s)
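This failure, and the unpause failure that follows, share one root cause: minikube's pause path shells out to "sudo runc list -f json" on the node, and runc exits 1 because /run/runc does not exist. Whether the state directory is simply missing (while the crio-managed containers are still there) can be checked by hand; a sketch assuming the json-output-428708 profile from this run is still up, using only commands that already appear in the logs plus a plain ls:

	# diagnostic sketch only; not part of json_output_test.go
	minikube -p json-output-428708 ssh -- sudo ls /run/runc
	minikube -p json-output-428708 ssh -- sudo crictl ps -a --quiet

An empty or missing /run/runc while crictl still lists containers matches the GUEST_PAUSE error captured in the JSON events above.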

                                                
                                    
TestJSONOutput/unpause/Command (1.93s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-428708 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-428708 --output=json --user=testUser: exit status 80 (1.931018477s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a6286788-bd79-4350-aa1d-2cc2de27e213","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-428708 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"6a95c4ef-8e55-468b-81ce-53663e0ed4f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2026-01-10T02:11:15Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"787676c2-7707-4b4d-a83a-7b7faa412124","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-428708 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.93s)

                                                
                                    
TestPause/serial/Pause (9.21s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-576041 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-576041 --alsologtostderr -v=5: exit status 80 (2.638213088s)

                                                
                                                
-- stdout --
	* Pausing node pause-576041 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:24:22.153158  134047 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:24:22.153284  134047 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:24:22.153295  134047 out.go:374] Setting ErrFile to fd 2...
	I0110 02:24:22.153302  134047 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:24:22.153607  134047 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:24:22.153896  134047 out.go:368] Setting JSON to false
	I0110 02:24:22.153915  134047 mustload.go:66] Loading cluster: pause-576041
	I0110 02:24:22.154382  134047 config.go:182] Loaded profile config "pause-576041": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:24:22.154901  134047 cli_runner.go:164] Run: docker container inspect pause-576041 --format={{.State.Status}}
	I0110 02:24:22.182029  134047 host.go:66] Checking if "pause-576041" exists ...
	I0110 02:24:22.182363  134047 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:24:22.269887  134047 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:71 SystemTime:2026-01-10 02:24:22.25729405 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:24:22.270777  134047 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22414/minikube-v1.37.0-1767924026-22414-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767924026-22414/minikube-v1.37.0-1767924026-22414-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767924026-22414-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:pause-576041 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true
) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0110 02:24:22.274517  134047 out.go:179] * Pausing node pause-576041 ... 
	I0110 02:24:22.277364  134047 host.go:66] Checking if "pause-576041" exists ...
	I0110 02:24:22.277696  134047 ssh_runner.go:195] Run: systemctl --version
	I0110 02:24:22.277745  134047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-576041
	I0110 02:24:22.297593  134047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/pause-576041/id_rsa Username:docker}
	I0110 02:24:22.403402  134047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:24:22.418420  134047 pause.go:52] kubelet running: true
	I0110 02:24:22.418489  134047 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:24:22.703648  134047 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:24:22.703740  134047 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:24:22.807579  134047 cri.go:96] found id: "9562469ec4ee2cfb0e8e8a4bbf3cd62d2410380d89c3a36a5ba7417f9b81c48f"
	I0110 02:24:22.807645  134047 cri.go:96] found id: "9049ee99038eb2e2c04d5ec2cf9af20de8bb6e424007ac7baaff8efdbd81693a"
	I0110 02:24:22.807670  134047 cri.go:96] found id: "72a694d48aaa573611a483f5e8a36822399e9343c974f91abcb5a9311c626be9"
	I0110 02:24:22.807688  134047 cri.go:96] found id: "6f88fb6800748d0a3eda5e7f617d907ca7587329ec8a66c6c273b3598f0fb3c0"
	I0110 02:24:22.807721  134047 cri.go:96] found id: "e0a65106d7f56945de0463d25dda3767b8da75a39e8bc887a16a89aa81a85dce"
	I0110 02:24:22.807742  134047 cri.go:96] found id: "a5a358693fec225e5b52304fae2ab85ed7798c520c95158237f942e7a9c43641"
	I0110 02:24:22.807758  134047 cri.go:96] found id: "37aba50ea1d04a32d0842683b6ff0546c6732da93f939a34df3511ba18add8da"
	I0110 02:24:22.807774  134047 cri.go:96] found id: "40062b67b69e410587c91e52ab6543d0898a3e194407fc104c291b0fb61c1029"
	I0110 02:24:22.807827  134047 cri.go:96] found id: "514c3121cc53f0f15de5865e92e27b5e567ee5b237e801e40a76e9ea9900f34c"
	I0110 02:24:22.807853  134047 cri.go:96] found id: "0f01d6c0bb017febe4c27c013b7cfbaf56ed73588af6eb8be8c3a3e977a77b4a"
	I0110 02:24:22.807870  134047 cri.go:96] found id: "ebf7d912141a5fa637ea2b82d2ad6f30d9bc65d7d0ba7b5366b611b45ff2fcf3"
	I0110 02:24:22.807888  134047 cri.go:96] found id: "c6ceb165227b8c34eed391337502d5048d2059ef3f638d21a983f6e2b529db5e"
	I0110 02:24:22.807907  134047 cri.go:96] found id: "9d164e8dba73d71d84e6a3b2c1f639ddd54bf56a4385bd912b51c205d7ab8f5b"
	I0110 02:24:22.807935  134047 cri.go:96] found id: "4ede82649e9dfe8ab749c2972b2df18977a0c58b6ed7e82937851b0d3af45904"
	I0110 02:24:22.807959  134047 cri.go:96] found id: ""
	I0110 02:24:22.808024  134047 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:24:22.818912  134047 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:24:22Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:24:23.041408  134047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:24:23.055359  134047 pause.go:52] kubelet running: false
	I0110 02:24:23.055472  134047 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:24:23.239449  134047 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:24:23.239566  134047 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:24:23.323703  134047 cri.go:96] found id: "9562469ec4ee2cfb0e8e8a4bbf3cd62d2410380d89c3a36a5ba7417f9b81c48f"
	I0110 02:24:23.323722  134047 cri.go:96] found id: "9049ee99038eb2e2c04d5ec2cf9af20de8bb6e424007ac7baaff8efdbd81693a"
	I0110 02:24:23.323727  134047 cri.go:96] found id: "72a694d48aaa573611a483f5e8a36822399e9343c974f91abcb5a9311c626be9"
	I0110 02:24:23.323730  134047 cri.go:96] found id: "6f88fb6800748d0a3eda5e7f617d907ca7587329ec8a66c6c273b3598f0fb3c0"
	I0110 02:24:23.323733  134047 cri.go:96] found id: "e0a65106d7f56945de0463d25dda3767b8da75a39e8bc887a16a89aa81a85dce"
	I0110 02:24:23.323737  134047 cri.go:96] found id: "a5a358693fec225e5b52304fae2ab85ed7798c520c95158237f942e7a9c43641"
	I0110 02:24:23.323740  134047 cri.go:96] found id: "37aba50ea1d04a32d0842683b6ff0546c6732da93f939a34df3511ba18add8da"
	I0110 02:24:23.323743  134047 cri.go:96] found id: "40062b67b69e410587c91e52ab6543d0898a3e194407fc104c291b0fb61c1029"
	I0110 02:24:23.323746  134047 cri.go:96] found id: "514c3121cc53f0f15de5865e92e27b5e567ee5b237e801e40a76e9ea9900f34c"
	I0110 02:24:23.323752  134047 cri.go:96] found id: "0f01d6c0bb017febe4c27c013b7cfbaf56ed73588af6eb8be8c3a3e977a77b4a"
	I0110 02:24:23.323755  134047 cri.go:96] found id: "ebf7d912141a5fa637ea2b82d2ad6f30d9bc65d7d0ba7b5366b611b45ff2fcf3"
	I0110 02:24:23.323758  134047 cri.go:96] found id: "c6ceb165227b8c34eed391337502d5048d2059ef3f638d21a983f6e2b529db5e"
	I0110 02:24:23.323761  134047 cri.go:96] found id: "9d164e8dba73d71d84e6a3b2c1f639ddd54bf56a4385bd912b51c205d7ab8f5b"
	I0110 02:24:23.323764  134047 cri.go:96] found id: "4ede82649e9dfe8ab749c2972b2df18977a0c58b6ed7e82937851b0d3af45904"
	I0110 02:24:23.323767  134047 cri.go:96] found id: ""
	I0110 02:24:23.323847  134047 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:24:23.685213  134047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:24:23.700955  134047 pause.go:52] kubelet running: false
	I0110 02:24:23.701028  134047 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:24:23.877437  134047 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:24:23.877529  134047 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:24:23.957217  134047 cri.go:96] found id: "9562469ec4ee2cfb0e8e8a4bbf3cd62d2410380d89c3a36a5ba7417f9b81c48f"
	I0110 02:24:23.957241  134047 cri.go:96] found id: "9049ee99038eb2e2c04d5ec2cf9af20de8bb6e424007ac7baaff8efdbd81693a"
	I0110 02:24:23.957246  134047 cri.go:96] found id: "72a694d48aaa573611a483f5e8a36822399e9343c974f91abcb5a9311c626be9"
	I0110 02:24:23.957250  134047 cri.go:96] found id: "6f88fb6800748d0a3eda5e7f617d907ca7587329ec8a66c6c273b3598f0fb3c0"
	I0110 02:24:23.957253  134047 cri.go:96] found id: "e0a65106d7f56945de0463d25dda3767b8da75a39e8bc887a16a89aa81a85dce"
	I0110 02:24:23.957257  134047 cri.go:96] found id: "a5a358693fec225e5b52304fae2ab85ed7798c520c95158237f942e7a9c43641"
	I0110 02:24:23.957260  134047 cri.go:96] found id: "37aba50ea1d04a32d0842683b6ff0546c6732da93f939a34df3511ba18add8da"
	I0110 02:24:23.957263  134047 cri.go:96] found id: "40062b67b69e410587c91e52ab6543d0898a3e194407fc104c291b0fb61c1029"
	I0110 02:24:23.957266  134047 cri.go:96] found id: "514c3121cc53f0f15de5865e92e27b5e567ee5b237e801e40a76e9ea9900f34c"
	I0110 02:24:23.957272  134047 cri.go:96] found id: "0f01d6c0bb017febe4c27c013b7cfbaf56ed73588af6eb8be8c3a3e977a77b4a"
	I0110 02:24:23.957276  134047 cri.go:96] found id: "ebf7d912141a5fa637ea2b82d2ad6f30d9bc65d7d0ba7b5366b611b45ff2fcf3"
	I0110 02:24:23.957279  134047 cri.go:96] found id: "c6ceb165227b8c34eed391337502d5048d2059ef3f638d21a983f6e2b529db5e"
	I0110 02:24:23.957282  134047 cri.go:96] found id: "9d164e8dba73d71d84e6a3b2c1f639ddd54bf56a4385bd912b51c205d7ab8f5b"
	I0110 02:24:23.957285  134047 cri.go:96] found id: "4ede82649e9dfe8ab749c2972b2df18977a0c58b6ed7e82937851b0d3af45904"
	I0110 02:24:23.957291  134047 cri.go:96] found id: ""
	I0110 02:24:23.957338  134047 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:24:24.349630  134047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:24:24.363742  134047 pause.go:52] kubelet running: false
	I0110 02:24:24.363842  134047 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:24:24.555936  134047 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:24:24.556024  134047 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:24:24.645396  134047 cri.go:96] found id: "9562469ec4ee2cfb0e8e8a4bbf3cd62d2410380d89c3a36a5ba7417f9b81c48f"
	I0110 02:24:24.645430  134047 cri.go:96] found id: "9049ee99038eb2e2c04d5ec2cf9af20de8bb6e424007ac7baaff8efdbd81693a"
	I0110 02:24:24.645434  134047 cri.go:96] found id: "72a694d48aaa573611a483f5e8a36822399e9343c974f91abcb5a9311c626be9"
	I0110 02:24:24.645438  134047 cri.go:96] found id: "6f88fb6800748d0a3eda5e7f617d907ca7587329ec8a66c6c273b3598f0fb3c0"
	I0110 02:24:24.645441  134047 cri.go:96] found id: "e0a65106d7f56945de0463d25dda3767b8da75a39e8bc887a16a89aa81a85dce"
	I0110 02:24:24.645445  134047 cri.go:96] found id: "a5a358693fec225e5b52304fae2ab85ed7798c520c95158237f942e7a9c43641"
	I0110 02:24:24.645448  134047 cri.go:96] found id: "37aba50ea1d04a32d0842683b6ff0546c6732da93f939a34df3511ba18add8da"
	I0110 02:24:24.645451  134047 cri.go:96] found id: "40062b67b69e410587c91e52ab6543d0898a3e194407fc104c291b0fb61c1029"
	I0110 02:24:24.645455  134047 cri.go:96] found id: "514c3121cc53f0f15de5865e92e27b5e567ee5b237e801e40a76e9ea9900f34c"
	I0110 02:24:24.645463  134047 cri.go:96] found id: "0f01d6c0bb017febe4c27c013b7cfbaf56ed73588af6eb8be8c3a3e977a77b4a"
	I0110 02:24:24.645466  134047 cri.go:96] found id: "ebf7d912141a5fa637ea2b82d2ad6f30d9bc65d7d0ba7b5366b611b45ff2fcf3"
	I0110 02:24:24.645470  134047 cri.go:96] found id: "c6ceb165227b8c34eed391337502d5048d2059ef3f638d21a983f6e2b529db5e"
	I0110 02:24:24.645473  134047 cri.go:96] found id: "9d164e8dba73d71d84e6a3b2c1f639ddd54bf56a4385bd912b51c205d7ab8f5b"
	I0110 02:24:24.645475  134047 cri.go:96] found id: "4ede82649e9dfe8ab749c2972b2df18977a0c58b6ed7e82937851b0d3af45904"
	I0110 02:24:24.645478  134047 cri.go:96] found id: ""
	I0110 02:24:24.645535  134047 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:24:24.674405  134047 out.go:203] 
	W0110 02:24:24.680196  134047 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:24:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:24:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 02:24:24.680228  134047 out.go:285] * 
	* 
	W0110 02:24:24.682973  134047 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 02:24:24.693254  134047 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-576041 --alsologtostderr -v=5" : exit status 80
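The --alsologtostderr trace above records the sequence the pause command runs on the node: check whether kubelet is active, disable it with systemctl, enumerate the kube-system, kubernetes-dashboard and istio-operator containers through crictl, then call "sudo runc list -f json" before freezing anything. Every crictl pass still returns the same fourteen container IDs, but the runc call fails on each retry with "open /run/runc: no such file or directory", which is what surfaces as GUEST_PAUSE. A manual replay of those steps, sketched here under the assumption that the pause-576041 node from this run is still running, would look like:

	# manual replay of the steps logged above; not part of pause_test.go
	minikube -p pause-576041 ssh -- sudo systemctl is-active kubelet
	minikube -p pause-576041 ssh -- sudo systemctl disable --now kubelet
	minikube -p pause-576041 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	minikube -p pause-576041 ssh -- sudo runc list -f json   # the call that exits 1 in this run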
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-576041
helpers_test.go:244: (dbg) docker inspect pause-576041:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6cb286f29b60350b1c1123ab3ff56e0ab94bf3cfe0f1bf5d9cf34d4987896a5",
	        "Created": "2026-01-10T02:23:05.092474941Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 128023,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:23:05.596553492Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/e6cb286f29b60350b1c1123ab3ff56e0ab94bf3cfe0f1bf5d9cf34d4987896a5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6cb286f29b60350b1c1123ab3ff56e0ab94bf3cfe0f1bf5d9cf34d4987896a5/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6cb286f29b60350b1c1123ab3ff56e0ab94bf3cfe0f1bf5d9cf34d4987896a5/hosts",
	        "LogPath": "/var/lib/docker/containers/e6cb286f29b60350b1c1123ab3ff56e0ab94bf3cfe0f1bf5d9cf34d4987896a5/e6cb286f29b60350b1c1123ab3ff56e0ab94bf3cfe0f1bf5d9cf34d4987896a5-json.log",
	        "Name": "/pause-576041",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-576041:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-576041",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6cb286f29b60350b1c1123ab3ff56e0ab94bf3cfe0f1bf5d9cf34d4987896a5",
	                "LowerDir": "/var/lib/docker/overlay2/4e3578eb2637d8217699ae6a3f6d53d0a00f0d116dc494de4f86d3aa748754ef-init/diff:/var/lib/docker/overlay2/2b235871be32ebd3e55c0a490bf5b289824c6be94b4d2763f5bca3b814af3fd1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e3578eb2637d8217699ae6a3f6d53d0a00f0d116dc494de4f86d3aa748754ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e3578eb2637d8217699ae6a3f6d53d0a00f0d116dc494de4f86d3aa748754ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e3578eb2637d8217699ae6a3f6d53d0a00f0d116dc494de4f86d3aa748754ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-576041",
	                "Source": "/var/lib/docker/volumes/pause-576041/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-576041",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-576041",
	                "name.minikube.sigs.k8s.io": "pause-576041",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8abd76b8d27ac331589a8a2f9292aef26cb23f35fed491f620566b004125ad1a",
	            "SandboxKey": "/var/run/docker/netns/8abd76b8d27a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32963"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32964"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32967"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32965"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32966"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-576041": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:29:5f:c4:75:8e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0d0e90c6457f2b568d003de1f2b62d56c82a0d73560d58f9439f8e1665f714a3",
	                    "EndpointID": "5be4866c9353bf310bf77c6f760e29bc1aaf3845366c06fb1450e4b44084b131",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-576041",
	                        "e6cb286f29b6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-576041 -n pause-576041
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-576041 -n pause-576041: exit status 2 (445.624059ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-576041 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-576041 logs -n 25: (2.135366073s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                       │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ list -p multinode-940034                                                                                         │ multinode-940034            │ jenkins │ v1.37.0 │ 10 Jan 26 02:20 UTC │                     │
	│ start   │ -p multinode-940034-m02 --driver=docker  --container-runtime=crio                                                │ multinode-940034-m02        │ jenkins │ v1.37.0 │ 10 Jan 26 02:20 UTC │                     │
	│ start   │ -p multinode-940034-m03 --driver=docker  --container-runtime=crio                                                │ multinode-940034-m03        │ jenkins │ v1.37.0 │ 10 Jan 26 02:20 UTC │ 10 Jan 26 02:20 UTC │
	│ node    │ add -p multinode-940034                                                                                          │ multinode-940034            │ jenkins │ v1.37.0 │ 10 Jan 26 02:20 UTC │                     │
	│ delete  │ -p multinode-940034-m03                                                                                          │ multinode-940034-m03        │ jenkins │ v1.37.0 │ 10 Jan 26 02:20 UTC │ 10 Jan 26 02:20 UTC │
	│ delete  │ -p multinode-940034                                                                                              │ multinode-940034            │ jenkins │ v1.37.0 │ 10 Jan 26 02:20 UTC │ 10 Jan 26 02:20 UTC │
	│ start   │ -p scheduled-stop-325096 --memory=3072 --driver=docker  --container-runtime=crio                                 │ scheduled-stop-325096       │ jenkins │ v1.37.0 │ 10 Jan 26 02:20 UTC │ 10 Jan 26 02:21 UTC │
	│ stop    │ -p scheduled-stop-325096 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-325096       │ jenkins │ v1.37.0 │ 10 Jan 26 02:21 UTC │                     │
	│ stop    │ -p scheduled-stop-325096 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-325096       │ jenkins │ v1.37.0 │ 10 Jan 26 02:21 UTC │                     │
	│ stop    │ -p scheduled-stop-325096 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-325096       │ jenkins │ v1.37.0 │ 10 Jan 26 02:21 UTC │                     │
	│ stop    │ -p scheduled-stop-325096 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-325096       │ jenkins │ v1.37.0 │ 10 Jan 26 02:21 UTC │                     │
	│ stop    │ -p scheduled-stop-325096 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-325096       │ jenkins │ v1.37.0 │ 10 Jan 26 02:21 UTC │                     │
	│ stop    │ -p scheduled-stop-325096 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-325096       │ jenkins │ v1.37.0 │ 10 Jan 26 02:21 UTC │                     │
	│ stop    │ -p scheduled-stop-325096 --cancel-scheduled                                                                      │ scheduled-stop-325096       │ jenkins │ v1.37.0 │ 10 Jan 26 02:21 UTC │ 10 Jan 26 02:21 UTC │
	│ stop    │ -p scheduled-stop-325096 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-325096       │ jenkins │ v1.37.0 │ 10 Jan 26 02:21 UTC │                     │
	│ stop    │ -p scheduled-stop-325096 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-325096       │ jenkins │ v1.37.0 │ 10 Jan 26 02:21 UTC │                     │
	│ stop    │ -p scheduled-stop-325096 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-325096       │ jenkins │ v1.37.0 │ 10 Jan 26 02:21 UTC │ 10 Jan 26 02:22 UTC │
	│ delete  │ -p scheduled-stop-325096                                                                                         │ scheduled-stop-325096       │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:22 UTC │
	│ start   │ -p insufficient-storage-447390 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio │ insufficient-storage-447390 │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │                     │
	│ delete  │ -p insufficient-storage-447390                                                                                   │ insufficient-storage-447390 │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:22 UTC │
	│ start   │ -p pause-576041 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio        │ pause-576041                │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:23 UTC │
	│ start   │ -p missing-upgrade-219545 --memory=3072 --driver=docker  --container-runtime=crio                                │ missing-upgrade-219545      │ jenkins │ v1.35.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:24 UTC │
	│ start   │ -p pause-576041 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ pause-576041                │ jenkins │ v1.37.0 │ 10 Jan 26 02:23 UTC │ 10 Jan 26 02:24 UTC │
	│ start   │ -p missing-upgrade-219545 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio         │ missing-upgrade-219545      │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │                     │
	│ pause   │ -p pause-576041 --alsologtostderr -v=5                                                                           │ pause-576041                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:24:01
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:24:01.604608  132852 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:24:01.604771  132852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:24:01.604780  132852 out.go:374] Setting ErrFile to fd 2...
	I0110 02:24:01.604786  132852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:24:01.605067  132852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:24:01.605447  132852 out.go:368] Setting JSON to false
	I0110 02:24:01.606291  132852 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3991,"bootTime":1768007851,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 02:24:01.606361  132852 start.go:143] virtualization:  
	I0110 02:24:01.612202  132852 out.go:179] * [missing-upgrade-219545] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:24:01.616944  132852 notify.go:221] Checking for updates...
	I0110 02:24:01.617478  132852 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:24:01.621604  132852 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:24:01.624398  132852 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:24:01.627215  132852 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 02:24:01.630082  132852 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:24:01.632896  132852 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:24:01.636301  132852 config.go:182] Loaded profile config "missing-upgrade-219545": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0110 02:24:01.639696  132852 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I0110 02:24:01.642461  132852 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:24:01.685047  132852 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:24:01.685161  132852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:24:01.791499  132852 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-10 02:24:01.781931729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:24:01.791609  132852 docker.go:319] overlay module found
	I0110 02:24:01.794736  132852 out.go:179] * Using the docker driver based on existing profile
	I0110 02:24:01.797488  132852 start.go:309] selected driver: docker
	I0110 02:24:01.797508  132852 start.go:928] validating driver "docker" against &{Name:missing-upgrade-219545 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-219545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:24:01.797953  132852 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:24:01.798625  132852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:24:01.880683  132852 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-10 02:24:01.869729137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:24:01.880991  132852 cni.go:84] Creating CNI manager for ""
	I0110 02:24:01.881059  132852 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:24:01.881110  132852 start.go:353] cluster config:
	{Name:missing-upgrade-219545 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-219545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:24:01.884278  132852 out.go:179] * Starting "missing-upgrade-219545" primary control-plane node in "missing-upgrade-219545" cluster
	I0110 02:24:01.887052  132852 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:24:01.890034  132852 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:24:01.893199  132852 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0110 02:24:01.893250  132852 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I0110 02:24:01.893260  132852 cache.go:65] Caching tarball of preloaded images
	I0110 02:24:01.893337  132852 preload.go:251] Found /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 02:24:01.893346  132852 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0110 02:24:01.893447  132852 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/missing-upgrade-219545/config.json ...
	I0110 02:24:01.893655  132852 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0110 02:24:01.922533  132852 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0110 02:24:01.922554  132852 cache.go:158] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0110 02:24:01.922568  132852 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:24:01.922596  132852 start.go:360] acquireMachinesLock for missing-upgrade-219545: {Name:mk4336ddb56fd92565447cbd148589c9940f25a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:24:01.922653  132852 start.go:364] duration metric: took 36.824µs to acquireMachinesLock for "missing-upgrade-219545"
	I0110 02:24:01.922677  132852 start.go:96] Skipping create...Using existing machine configuration
	I0110 02:24:01.922687  132852 fix.go:54] fixHost starting: 
	I0110 02:24:01.922932  132852 cli_runner.go:164] Run: docker container inspect missing-upgrade-219545 --format={{.State.Status}}
	W0110 02:24:01.941895  132852 cli_runner.go:211] docker container inspect missing-upgrade-219545 --format={{.State.Status}} returned with exit code 1
	I0110 02:24:01.941954  132852 fix.go:112] recreateIfNeeded on missing-upgrade-219545: state= err=unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:01.941976  132852 fix.go:117] machineExists: false. err=machine does not exist
	I0110 02:24:01.945178  132852 out.go:179] * docker "missing-upgrade-219545" container is missing, will recreate.
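[editor's note] The recreate decision above is driven by `docker container inspect --format={{.State.Status}}` exiting non-zero with "No such container". A minimal standalone Go sketch of that check (not minikube's own code; the container name is copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState returns the container's state, or ok=false when
// `docker container inspect` fails (e.g. "No such container"),
// which is the condition that triggers the recreate path above.
func containerState(name string) (state string, ok bool) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", false
	}
	return strings.TrimSpace(string(out)), true
}

func main() {
	if state, ok := containerState("missing-upgrade-219545"); !ok {
		fmt.Println("container is missing, will recreate")
	} else {
		fmt.Println("container state:", state)
	}
}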
	I0110 02:24:01.322928  132222 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:24:01.322953  132222 machine.go:97] duration metric: took 6.911557826s to provisionDockerMachine
	I0110 02:24:01.322965  132222 start.go:293] postStartSetup for "pause-576041" (driver="docker")
	I0110 02:24:01.322975  132222 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:24:01.323041  132222 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:24:01.323105  132222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-576041
	I0110 02:24:01.343097  132222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/pause-576041/id_rsa Username:docker}
	I0110 02:24:01.458699  132222 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:24:01.463225  132222 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:24:01.463297  132222 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:24:01.463332  132222 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/addons for local assets ...
	I0110 02:24:01.463404  132222 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/files for local assets ...
	I0110 02:24:01.463541  132222 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> 41682.pem in /etc/ssl/certs
	I0110 02:24:01.463681  132222 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:24:01.503192  132222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:24:01.527264  132222 start.go:296] duration metric: took 204.284315ms for postStartSetup
	I0110 02:24:01.527358  132222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:24:01.527397  132222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-576041
	I0110 02:24:01.555915  132222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/pause-576041/id_rsa Username:docker}
	I0110 02:24:01.677121  132222 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:24:01.685939  132222 fix.go:56] duration metric: took 7.309419693s for fixHost
	I0110 02:24:01.685961  132222 start.go:83] releasing machines lock for "pause-576041", held for 7.309463114s
	I0110 02:24:01.686028  132222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-576041
	I0110 02:24:01.724258  132222 ssh_runner.go:195] Run: cat /version.json
	I0110 02:24:01.724359  132222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-576041
	I0110 02:24:01.724616  132222 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:24:01.724679  132222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-576041
	I0110 02:24:01.774346  132222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/pause-576041/id_rsa Username:docker}
	I0110 02:24:01.778545  132222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/pause-576041/id_rsa Username:docker}
	I0110 02:24:01.895360  132222 ssh_runner.go:195] Run: systemctl --version
	I0110 02:24:02.020237  132222 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:24:02.076208  132222 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:24:02.081030  132222 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:24:02.081118  132222 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:24:02.094300  132222 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
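[editor's note] The `find /etc/cni/net.d ... -name *bridge* -or -name *podman* ... -exec mv {} {}.mk_disabled` step above sidelines any pre-existing bridge or podman CNI configs so kindnet can take over; here it finds nothing. A rough standalone Go equivalent of the same rename convention (a sketch, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames bridge/podman CNI config files in dir to
// "<name>.mk_disabled", mirroring the find/mv invocation in the log above.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, src)
		}
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
	}
	if len(moved) == 0 {
		fmt.Println("no active bridge cni configs found - nothing to disable")
	}
}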
	I0110 02:24:02.094325  132222 start.go:496] detecting cgroup driver to use...
	I0110 02:24:02.094379  132222 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 02:24:02.094440  132222 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:24:02.112308  132222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:24:02.128659  132222 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:24:02.128750  132222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:24:02.147988  132222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:24:02.163954  132222 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:24:02.315005  132222 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:24:02.464516  132222 docker.go:234] disabling docker service ...
	I0110 02:24:02.464588  132222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:24:02.484650  132222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:24:02.500053  132222 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:24:02.640868  132222 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:24:02.774951  132222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:24:02.788540  132222 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:24:02.801881  132222 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:24:02.801968  132222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:24:02.810761  132222 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 02:24:02.810837  132222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:24:02.820181  132222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:24:02.828752  132222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:24:02.837540  132222 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:24:02.845693  132222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:24:02.854354  132222 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:24:02.862465  132222 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:24:02.870998  132222 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:24:02.878372  132222 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:24:02.885724  132222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:24:03.014413  132222 ssh_runner.go:195] Run: sudo systemctl restart crio
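[editor's note] The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon_cgroup, the default_sysctls entry for unprivileged ports), then reload systemd and restart crio. A compact sketch that replays the first of those shell edits locally, with command strings taken verbatim from the log; the remaining sed edits follow the same pattern and are omitted here. It assumes root and the same drop-in path:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// The same in-place edits the log applies over SSH, in order.
	cmds := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
			log.Fatalf("%s: %v\n%s", c, err, out)
		}
	}
}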
	I0110 02:24:03.222719  132222 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:24:03.222793  132222 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:24:03.226637  132222 start.go:574] Will wait 60s for crictl version
	I0110 02:24:03.226701  132222 ssh_runner.go:195] Run: which crictl
	I0110 02:24:03.230097  132222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:24:03.254626  132222 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
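[editor's note] "Will wait 60s for socket path /var/run/crio/crio.sock" above is essentially a stat-with-deadline loop before crictl is queried. A minimal sketch of that pattern; the 60s budget comes from the log, while the polling interval is an assumption:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes,
// roughly what the "Will wait 60s for socket path" step does.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond) // assumed interval
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("crio socket is up")
}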
	I0110 02:24:03.254718  132222 ssh_runner.go:195] Run: crio --version
	I0110 02:24:03.281958  132222 ssh_runner.go:195] Run: crio --version
	I0110 02:24:03.316317  132222 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:24:03.319189  132222 cli_runner.go:164] Run: docker network inspect pause-576041 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:24:03.335073  132222 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 02:24:03.339007  132222 kubeadm.go:884] updating cluster {Name:pause-576041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-576041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:24:03.339148  132222 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:24:03.339213  132222 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:24:03.379116  132222 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:24:03.379137  132222 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:24:03.379197  132222 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:24:03.404884  132222 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:24:03.404907  132222 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:24:03.404914  132222 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 02:24:03.405010  132222 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-576041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:pause-576041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:24:03.405092  132222 ssh_runner.go:195] Run: crio config
	I0110 02:24:03.474680  132222 cni.go:84] Creating CNI manager for ""
	I0110 02:24:03.474761  132222 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:24:03.474793  132222 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:24:03.474847  132222 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-576041 NodeName:pause-576041 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:24:03.475010  132222 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-576041"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:24:03.475100  132222 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:24:03.484150  132222 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:24:03.484222  132222 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:24:03.492517  132222 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0110 02:24:03.505376  132222 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:24:03.520006  132222 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I0110 02:24:03.533439  132222 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:24:03.537406  132222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:24:03.668184  132222 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:24:03.681171  132222 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/pause-576041 for IP: 192.168.76.2
	I0110 02:24:03.681197  132222 certs.go:195] generating shared ca certs ...
	I0110 02:24:03.681220  132222 certs.go:227] acquiring lock for ca certs: {Name:mk60bb8578732e3e8efcf11c7d77fb48585828c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:24:03.681426  132222 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key
	I0110 02:24:03.681487  132222 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key
	I0110 02:24:03.681500  132222 certs.go:257] generating profile certs ...
	I0110 02:24:03.681589  132222 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/pause-576041/client.key
	I0110 02:24:03.681663  132222 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/pause-576041/apiserver.key.cd55d7b4
	I0110 02:24:03.681710  132222 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/pause-576041/proxy-client.key
	I0110 02:24:03.681826  132222 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem (1338 bytes)
	W0110 02:24:03.681865  132222 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168_empty.pem, impossibly tiny 0 bytes
	I0110 02:24:03.681883  132222 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:24:03.681913  132222 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:24:03.681950  132222 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:24:03.681978  132222 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem (1679 bytes)
	I0110 02:24:03.682030  132222 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:24:03.682673  132222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:24:03.702022  132222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:24:03.718959  132222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:24:03.737003  132222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 02:24:03.754933  132222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/pause-576041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0110 02:24:03.773032  132222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/pause-576041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:24:03.790270  132222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/pause-576041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:24:03.807871  132222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/pause-576041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 02:24:03.825486  132222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:24:03.842849  132222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I0110 02:24:03.859667  132222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I0110 02:24:03.876615  132222 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:24:03.890245  132222 ssh_runner.go:195] Run: openssl version
	I0110 02:24:03.898536  132222 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:24:03.906342  132222 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:24:03.914206  132222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:24:03.917942  132222 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:24:03.918026  132222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:24:03.958537  132222 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:24:03.965622  132222 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I0110 02:24:03.972727  132222 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I0110 02:24:03.979949  132222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I0110 02:24:03.983303  132222 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:58 /usr/share/ca-certificates/4168.pem
	I0110 02:24:03.983372  132222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I0110 02:24:04.024484  132222 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:24:04.032234  132222 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I0110 02:24:04.039727  132222 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I0110 02:24:04.047116  132222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I0110 02:24:04.050817  132222 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:58 /usr/share/ca-certificates/41682.pem
	I0110 02:24:04.050914  132222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I0110 02:24:04.092954  132222 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
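[editor's note] The repeated `openssl x509 -hash -noout` / `ln -fs` / `test -L` sequence above installs each CA cert into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). A standalone sketch of that convention, shelling out to openssl for the hash; this is not minikube's code and the example paths are taken from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installTrustedCert symlinks certPath into certsDir under "<subject-hash>.0",
// the layout OpenSSL uses to look up trusted CAs.
func installTrustedCert(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // -f behaviour of ln -fs
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := installTrustedCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("installed:", link)
}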
	I0110 02:24:04.100771  132222 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:24:04.104539  132222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 02:24:04.146105  132222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 02:24:04.193382  132222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 02:24:04.237589  132222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 02:24:04.281497  132222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 02:24:04.347563  132222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
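[editor's note] Each `openssl x509 -noout -in <cert> -checkend 86400` run above asks whether the certificate expires within the next 24 hours. The same check can be expressed in pure Go with crypto/x509; this is a sketch, with one of the cert paths from the log used as the example:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question `openssl x509 -checkend 86400` answers for 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}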
	I0110 02:24:04.419845  132222 kubeadm.go:401] StartCluster: {Name:pause-576041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-576041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:24:04.419984  132222 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:24:04.420089  132222 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:24:04.520647  132222 cri.go:96] found id: "e0a65106d7f56945de0463d25dda3767b8da75a39e8bc887a16a89aa81a85dce"
	I0110 02:24:04.520666  132222 cri.go:96] found id: "a5a358693fec225e5b52304fae2ab85ed7798c520c95158237f942e7a9c43641"
	I0110 02:24:04.520671  132222 cri.go:96] found id: "37aba50ea1d04a32d0842683b6ff0546c6732da93f939a34df3511ba18add8da"
	I0110 02:24:04.520679  132222 cri.go:96] found id: "40062b67b69e410587c91e52ab6543d0898a3e194407fc104c291b0fb61c1029"
	I0110 02:24:04.520683  132222 cri.go:96] found id: "514c3121cc53f0f15de5865e92e27b5e567ee5b237e801e40a76e9ea9900f34c"
	I0110 02:24:04.520688  132222 cri.go:96] found id: "0f01d6c0bb017febe4c27c013b7cfbaf56ed73588af6eb8be8c3a3e977a77b4a"
	I0110 02:24:04.520691  132222 cri.go:96] found id: "ebf7d912141a5fa637ea2b82d2ad6f30d9bc65d7d0ba7b5366b611b45ff2fcf3"
	I0110 02:24:04.520694  132222 cri.go:96] found id: "c6ceb165227b8c34eed391337502d5048d2059ef3f638d21a983f6e2b529db5e"
	I0110 02:24:04.520697  132222 cri.go:96] found id: "9d164e8dba73d71d84e6a3b2c1f639ddd54bf56a4385bd912b51c205d7ab8f5b"
	I0110 02:24:04.520704  132222 cri.go:96] found id: "4ede82649e9dfe8ab749c2972b2df18977a0c58b6ed7e82937851b0d3af45904"
	I0110 02:24:04.520707  132222 cri.go:96] found id: ""
	I0110 02:24:04.520753  132222 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 02:24:04.564698  132222 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:24:04Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:24:04.564779  132222 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:24:04.581047  132222 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 02:24:04.581063  132222 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 02:24:04.581112  132222 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 02:24:04.605167  132222 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 02:24:04.605816  132222 kubeconfig.go:125] found "pause-576041" server: "https://192.168.76.2:8443"
	I0110 02:24:04.606806  132222 kapi.go:59] client config for pause-576041: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22414-2353/.minikube/profiles/pause-576041/client.crt", KeyFile:"/home/jenkins/minikube-integration/22414-2353/.minikube/profiles/pause-576041/client.key", CAFile:"/home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f7bf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0110 02:24:04.607274  132222 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I0110 02:24:04.607286  132222 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0110 02:24:04.607291  132222 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0110 02:24:04.607296  132222 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0110 02:24:04.607300  132222 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I0110 02:24:04.607304  132222 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I0110 02:24:04.607585  132222 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 02:24:04.628054  132222 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0110 02:24:04.628132  132222 kubeadm.go:602] duration metric: took 47.063348ms to restartPrimaryControlPlane
	I0110 02:24:04.628158  132222 kubeadm.go:403] duration metric: took 208.32368ms to StartCluster
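[editor's note] The `sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new` step above drives the "does not require reconfiguration" conclusion: diff exits 0 when the files are identical and 1 when they differ. A small standalone sketch of that decision (not minikube's code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// needsReconfiguration runs diff and maps its exit status to a decision:
// exit 0 = identical (no reconfiguration needed), exit 1 = files differ.
func needsReconfiguration(current, proposed string) (bool, error) {
	err := exec.Command("diff", "-u", current, proposed).Run()
	if err == nil {
		return false, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, nil
	}
	return false, err // exit status >= 2 means diff itself failed
}

func main() {
	changed, err := needsReconfiguration("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("needs reconfiguration:", changed)
}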
	I0110 02:24:04.628199  132222 settings.go:142] acquiring lock: {Name:mkf19ab3097ad600bdb33c5b47b20062ddaabac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:24:04.628286  132222 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:24:04.629281  132222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:24:04.629789  132222 config.go:182] Loaded profile config "pause-576041": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:24:04.629571  132222 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:24:04.629892  132222 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:24:04.633891  132222 out.go:179] * Enabled addons: 
	I0110 02:24:04.634002  132222 out.go:179] * Verifying Kubernetes components...
	I0110 02:24:01.947984  132852 delete.go:124] DEMOLISHING missing-upgrade-219545 ...
	I0110 02:24:01.948090  132852 cli_runner.go:164] Run: docker container inspect missing-upgrade-219545 --format={{.State.Status}}
	W0110 02:24:01.965426  132852 cli_runner.go:211] docker container inspect missing-upgrade-219545 --format={{.State.Status}} returned with exit code 1
	W0110 02:24:01.965488  132852 stop.go:83] unable to get state: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:01.965510  132852 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:01.965978  132852 cli_runner.go:164] Run: docker container inspect missing-upgrade-219545 --format={{.State.Status}}
	W0110 02:24:01.982193  132852 cli_runner.go:211] docker container inspect missing-upgrade-219545 --format={{.State.Status}} returned with exit code 1
	I0110 02:24:01.982263  132852 delete.go:82] Unable to get host status for missing-upgrade-219545, assuming it has already been deleted: state: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:01.982331  132852 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-219545
	W0110 02:24:01.997212  132852 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-219545 returned with exit code 1
	I0110 02:24:01.997244  132852 kic.go:371] could not find the container missing-upgrade-219545 to remove it. will try anyways
	I0110 02:24:01.997297  132852 cli_runner.go:164] Run: docker container inspect missing-upgrade-219545 --format={{.State.Status}}
	W0110 02:24:02.019641  132852 cli_runner.go:211] docker container inspect missing-upgrade-219545 --format={{.State.Status}} returned with exit code 1
	W0110 02:24:02.019716  132852 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:02.019786  132852 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-219545 /bin/bash -c "sudo init 0"
	W0110 02:24:02.040760  132852 cli_runner.go:211] docker exec --privileged -t missing-upgrade-219545 /bin/bash -c "sudo init 0" returned with exit code 1
	I0110 02:24:02.040792  132852 oci.go:659] error shutdown missing-upgrade-219545: docker exec --privileged -t missing-upgrade-219545 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:03.040969  132852 cli_runner.go:164] Run: docker container inspect missing-upgrade-219545 --format={{.State.Status}}
	W0110 02:24:03.061848  132852 cli_runner.go:211] docker container inspect missing-upgrade-219545 --format={{.State.Status}} returned with exit code 1
	I0110 02:24:03.061921  132852 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:03.061931  132852 oci.go:673] temporary error: container missing-upgrade-219545 status is  but expect it to be exited
	I0110 02:24:03.061978  132852 retry.go:84] will retry after 600ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:03.632514  132852 cli_runner.go:164] Run: docker container inspect missing-upgrade-219545 --format={{.State.Status}}
	W0110 02:24:03.649720  132852 cli_runner.go:211] docker container inspect missing-upgrade-219545 --format={{.State.Status}} returned with exit code 1
	I0110 02:24:03.649789  132852 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:03.649802  132852 oci.go:673] temporary error: container missing-upgrade-219545 status is  but expect it to be exited
	I0110 02:24:04.459231  132852 cli_runner.go:164] Run: docker container inspect missing-upgrade-219545 --format={{.State.Status}}
	W0110 02:24:04.477303  132852 cli_runner.go:211] docker container inspect missing-upgrade-219545 --format={{.State.Status}} returned with exit code 1
	I0110 02:24:04.477365  132852 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:04.477374  132852 oci.go:673] temporary error: container missing-upgrade-219545 status is  but expect it to be exited
	I0110 02:24:05.141945  132852 cli_runner.go:164] Run: docker container inspect missing-upgrade-219545 --format={{.State.Status}}
	W0110 02:24:05.162731  132852 cli_runner.go:211] docker container inspect missing-upgrade-219545 --format={{.State.Status}} returned with exit code 1
	I0110 02:24:05.162787  132852 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:05.162796  132852 oci.go:673] temporary error: container missing-upgrade-219545 status is  but expect it to be exited
	I0110 02:24:04.638104  132222 addons.go:530] duration metric: took 8.206472ms for enable addons: enabled=[]
	I0110 02:24:04.638228  132222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:24:04.858285  132222 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:24:04.876771  132222 node_ready.go:35] waiting up to 6m0s for node "pause-576041" to be "Ready" ...
	I0110 02:24:07.528694  132222 node_ready.go:49] node "pause-576041" is "Ready"
	I0110 02:24:07.528719  132222 node_ready.go:38] duration metric: took 2.651921754s for node "pause-576041" to be "Ready" ...
	I0110 02:24:07.528733  132222 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:24:07.528789  132222 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:24:07.545116  132222 api_server.go:72] duration metric: took 2.915144702s to wait for apiserver process to appear ...
	I0110 02:24:07.545137  132222 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:24:07.545156  132222 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:24:07.564553  132222 api_server.go:325] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0110 02:24:07.564629  132222 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0110 02:24:08.045228  132222 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:24:08.054363  132222 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 02:24:08.054438  132222 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 02:24:08.546207  132222 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:24:08.554290  132222 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 02:24:08.555346  132222 api_server.go:141] control plane version: v1.35.0
	I0110 02:24:08.555389  132222 api_server.go:131] duration metric: took 1.010245255s to wait for apiserver health ...
	I0110 02:24:08.555398  132222 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:24:08.561287  132222 system_pods.go:59] 7 kube-system pods found
	I0110 02:24:08.561385  132222 system_pods.go:61] "coredns-7d764666f9-7zn9w" [a28f8265-e8a0-4238-820b-c5f2c9f0ebab] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:24:08.561409  132222 system_pods.go:61] "etcd-pause-576041" [7863ae93-ee32-47b8-9116-c86b72984d72] Running
	I0110 02:24:08.561443  132222 system_pods.go:61] "kindnet-59pgj" [46732288-4236-4a2f-b2f5-bd8a52bf77f9] Running
	I0110 02:24:08.561468  132222 system_pods.go:61] "kube-apiserver-pause-576041" [3bc247ab-e27a-4946-adeb-e9cda426217e] Running
	I0110 02:24:08.561490  132222 system_pods.go:61] "kube-controller-manager-pause-576041" [3f2e82c9-6bad-4218-8f2d-36e08d86f432] Running
	I0110 02:24:08.561528  132222 system_pods.go:61] "kube-proxy-qndk4" [c970f294-9561-4cb9-9b89-51b8f125f19a] Running
	I0110 02:24:08.561553  132222 system_pods.go:61] "kube-scheduler-pause-576041" [59fe2dc7-7b6c-434d-9ab5-6decf26bed87] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:24:08.561573  132222 system_pods.go:74] duration metric: took 6.167938ms to wait for pod list to return data ...
	I0110 02:24:08.561612  132222 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:24:08.564302  132222 default_sa.go:45] found service account: "default"
	I0110 02:24:08.564334  132222 default_sa.go:55] duration metric: took 2.694887ms for default service account to be created ...
	I0110 02:24:08.564343  132222 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:24:08.567212  132222 system_pods.go:86] 7 kube-system pods found
	I0110 02:24:08.567281  132222 system_pods.go:89] "coredns-7d764666f9-7zn9w" [a28f8265-e8a0-4238-820b-c5f2c9f0ebab] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:24:08.567296  132222 system_pods.go:89] "etcd-pause-576041" [7863ae93-ee32-47b8-9116-c86b72984d72] Running
	I0110 02:24:08.567304  132222 system_pods.go:89] "kindnet-59pgj" [46732288-4236-4a2f-b2f5-bd8a52bf77f9] Running
	I0110 02:24:08.567309  132222 system_pods.go:89] "kube-apiserver-pause-576041" [3bc247ab-e27a-4946-adeb-e9cda426217e] Running
	I0110 02:24:08.567314  132222 system_pods.go:89] "kube-controller-manager-pause-576041" [3f2e82c9-6bad-4218-8f2d-36e08d86f432] Running
	I0110 02:24:08.567320  132222 system_pods.go:89] "kube-proxy-qndk4" [c970f294-9561-4cb9-9b89-51b8f125f19a] Running
	I0110 02:24:08.567328  132222 system_pods.go:89] "kube-scheduler-pause-576041" [59fe2dc7-7b6c-434d-9ab5-6decf26bed87] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:24:08.567338  132222 system_pods.go:126] duration metric: took 2.989576ms to wait for k8s-apps to be running ...
	I0110 02:24:08.567365  132222 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:24:08.567421  132222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:24:08.587603  132222 system_svc.go:56] duration metric: took 20.229355ms WaitForService to wait for kubelet
	I0110 02:24:08.587640  132222 kubeadm.go:587] duration metric: took 3.957666925s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:24:08.587659  132222 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:24:08.591784  132222 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 02:24:08.591905  132222 node_conditions.go:123] node cpu capacity is 2
	I0110 02:24:08.591922  132222 node_conditions.go:105] duration metric: took 4.257911ms to run NodePressure ...
	I0110 02:24:08.591936  132222 start.go:242] waiting for startup goroutines ...
	I0110 02:24:08.591946  132222 start.go:247] waiting for cluster config update ...
	I0110 02:24:08.591958  132222 start.go:256] writing updated cluster config ...
	I0110 02:24:08.592250  132222 ssh_runner.go:195] Run: rm -f paused
	I0110 02:24:08.597956  132222 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:24:08.598577  132222 kapi.go:59] client config for pause-576041: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22414-2353/.minikube/profiles/pause-576041/client.crt", KeyFile:"/home/jenkins/minikube-integration/22414-2353/.minikube/profiles/pause-576041/client.key", CAFile:"/home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f7bf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0110 02:24:08.602015  132222 pod_ready.go:83] waiting for pod "coredns-7d764666f9-7zn9w" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:24:07.038048  132852 cli_runner.go:164] Run: docker container inspect missing-upgrade-219545 --format={{.State.Status}}
	W0110 02:24:07.075191  132852 cli_runner.go:211] docker container inspect missing-upgrade-219545 --format={{.State.Status}} returned with exit code 1
	I0110 02:24:07.075246  132852 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:07.075255  132852 oci.go:673] temporary error: container missing-upgrade-219545 status is  but expect it to be exited
	I0110 02:24:10.651955  132852 cli_runner.go:164] Run: docker container inspect missing-upgrade-219545 --format={{.State.Status}}
	W0110 02:24:10.675916  132852 cli_runner.go:211] docker container inspect missing-upgrade-219545 --format={{.State.Status}} returned with exit code 1
	I0110 02:24:10.675974  132852 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:10.675983  132852 oci.go:673] temporary error: container missing-upgrade-219545 status is  but expect it to be exited
	I0110 02:24:10.676018  132852 retry.go:84] will retry after 2.6s: couldn't verify container is exited. %v: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	W0110 02:24:10.636705  132222 pod_ready.go:104] pod "coredns-7d764666f9-7zn9w" is not "Ready", error: <nil>
	W0110 02:24:13.106760  132222 pod_ready.go:104] pod "coredns-7d764666f9-7zn9w" is not "Ready", error: <nil>
	I0110 02:24:13.301882  132852 cli_runner.go:164] Run: docker container inspect missing-upgrade-219545 --format={{.State.Status}}
	W0110 02:24:13.320235  132852 cli_runner.go:211] docker container inspect missing-upgrade-219545 --format={{.State.Status}} returned with exit code 1
	I0110 02:24:13.320297  132852 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:13.320319  132852 oci.go:673] temporary error: container missing-upgrade-219545 status is  but expect it to be exited
	W0110 02:24:15.108693  132222 pod_ready.go:104] pod "coredns-7d764666f9-7zn9w" is not "Ready", error: <nil>
	I0110 02:24:16.607719  132222 pod_ready.go:94] pod "coredns-7d764666f9-7zn9w" is "Ready"
	I0110 02:24:16.607742  132222 pod_ready.go:86] duration metric: took 8.005706677s for pod "coredns-7d764666f9-7zn9w" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:24:16.611370  132222 pod_ready.go:83] waiting for pod "etcd-pause-576041" in "kube-system" namespace to be "Ready" or be gone ...
	W0110 02:24:18.616226  132222 pod_ready.go:104] pod "etcd-pause-576041" is not "Ready", error: <nil>
	I0110 02:24:20.284812  132852 cli_runner.go:164] Run: docker container inspect missing-upgrade-219545 --format={{.State.Status}}
	W0110 02:24:20.299563  132852 cli_runner.go:211] docker container inspect missing-upgrade-219545 --format={{.State.Status}} returned with exit code 1
	I0110 02:24:20.299664  132852 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:20.299673  132852 oci.go:673] temporary error: container missing-upgrade-219545 status is  but expect it to be exited
	I0110 02:24:20.299705  132852 oci.go:88] couldn't shut down missing-upgrade-219545 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	 
	I0110 02:24:20.299765  132852 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-219545
	I0110 02:24:20.313469  132852 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-219545
	W0110 02:24:20.328092  132852 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-219545 returned with exit code 1
	I0110 02:24:20.328178  132852 cli_runner.go:164] Run: docker network inspect missing-upgrade-219545 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:24:20.343650  132852 cli_runner.go:164] Run: docker network rm missing-upgrade-219545
	I0110 02:24:20.446086  132852 fix.go:124] Sleeping 1 second for extra luck!
	I0110 02:24:21.446248  132852 start.go:125] createHost starting for "" (driver="docker")
	I0110 02:24:21.449567  132852 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:24:21.449693  132852 start.go:159] libmachine.API.Create for "missing-upgrade-219545" (driver="docker")
	I0110 02:24:21.449726  132852 client.go:173] LocalClient.Create starting
	I0110 02:24:21.449812  132852 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem
	I0110 02:24:21.449846  132852 main.go:144] libmachine: Decoding PEM data...
	I0110 02:24:21.449862  132852 main.go:144] libmachine: Parsing certificate...
	I0110 02:24:21.449909  132852 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem
	I0110 02:24:21.449925  132852 main.go:144] libmachine: Decoding PEM data...
	I0110 02:24:21.449936  132852 main.go:144] libmachine: Parsing certificate...
	I0110 02:24:21.450218  132852 cli_runner.go:164] Run: docker network inspect missing-upgrade-219545 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:24:21.465601  132852 cli_runner.go:211] docker network inspect missing-upgrade-219545 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:24:21.465686  132852 network_create.go:284] running [docker network inspect missing-upgrade-219545] to gather additional debugging logs...
	I0110 02:24:21.465703  132852 cli_runner.go:164] Run: docker network inspect missing-upgrade-219545
	W0110 02:24:21.492185  132852 cli_runner.go:211] docker network inspect missing-upgrade-219545 returned with exit code 1
	I0110 02:24:21.492216  132852 network_create.go:287] error running [docker network inspect missing-upgrade-219545]: docker network inspect missing-upgrade-219545: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-219545 not found
	I0110 02:24:21.492229  132852 network_create.go:289] output of [docker network inspect missing-upgrade-219545]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-219545 not found
	
	** /stderr **
	I0110 02:24:21.492330  132852 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:24:21.508226  132852 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dba69832168e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:28:cf:b8:5c:6c} reservation:<nil>}
	I0110 02:24:21.508497  132852 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1bcca9209747 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d2:8b:83:a4:43:ae} reservation:<nil>}
	I0110 02:24:21.508843  132852 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-383ddfacc8f8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:26:ff:fb:97:66:af} reservation:<nil>}
	I0110 02:24:21.509156  132852 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0d0e90c6457f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:12:6e:77:5b:2f:89} reservation:<nil>}
	I0110 02:24:21.509543  132852 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ba1640}
	I0110 02:24:21.509570  132852 network_create.go:124] attempt to create docker network missing-upgrade-219545 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0110 02:24:21.509624  132852 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-219545 missing-upgrade-219545
	I0110 02:24:21.570786  132852 network_create.go:108] docker network missing-upgrade-219545 192.168.85.0/24 created
	I0110 02:24:21.570818  132852 kic.go:121] calculated static IP "192.168.85.2" for the "missing-upgrade-219545" container
	I0110 02:24:21.570896  132852 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:24:21.586573  132852 cli_runner.go:164] Run: docker volume create missing-upgrade-219545 --label name.minikube.sigs.k8s.io=missing-upgrade-219545 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:24:21.603343  132852 oci.go:103] Successfully created a docker volume missing-upgrade-219545
	I0110 02:24:21.603451  132852 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-219545-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-219545 --entrypoint /usr/bin/test -v missing-upgrade-219545:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	W0110 02:24:20.616570  132222 pod_ready.go:104] pod "etcd-pause-576041" is not "Ready", error: <nil>
	I0110 02:24:21.116788  132222 pod_ready.go:94] pod "etcd-pause-576041" is "Ready"
	I0110 02:24:21.116817  132222 pod_ready.go:86] duration metric: took 4.505416876s for pod "etcd-pause-576041" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:24:21.119012  132222 pod_ready.go:83] waiting for pod "kube-apiserver-pause-576041" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:24:21.123388  132222 pod_ready.go:94] pod "kube-apiserver-pause-576041" is "Ready"
	I0110 02:24:21.123421  132222 pod_ready.go:86] duration metric: took 4.383824ms for pod "kube-apiserver-pause-576041" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:24:21.125598  132222 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-576041" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:24:21.129694  132222 pod_ready.go:94] pod "kube-controller-manager-pause-576041" is "Ready"
	I0110 02:24:21.129722  132222 pod_ready.go:86] duration metric: took 4.096135ms for pod "kube-controller-manager-pause-576041" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:24:21.132068  132222 pod_ready.go:83] waiting for pod "kube-proxy-qndk4" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:24:21.315275  132222 pod_ready.go:94] pod "kube-proxy-qndk4" is "Ready"
	I0110 02:24:21.315304  132222 pod_ready.go:86] duration metric: took 183.210591ms for pod "kube-proxy-qndk4" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:24:21.515603  132222 pod_ready.go:83] waiting for pod "kube-scheduler-pause-576041" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:24:21.915425  132222 pod_ready.go:94] pod "kube-scheduler-pause-576041" is "Ready"
	I0110 02:24:21.915451  132222 pod_ready.go:86] duration metric: took 399.823351ms for pod "kube-scheduler-pause-576041" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:24:21.915464  132222 pod_ready.go:40] duration metric: took 13.317476874s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:24:21.997899  132222 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 02:24:22.001575  132222 out.go:203] 
	W0110 02:24:22.006321  132222 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 02:24:22.009187  132222 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:24:22.013207  132222 out.go:179] * Done! kubectl is now configured to use "pause-576041" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 10 02:24:04 pause-576041 crio[2118]: time="2026-01-10T02:24:04.567723071Z" level=info msg="Created container 6f88fb6800748d0a3eda5e7f617d907ca7587329ec8a66c6c273b3598f0fb3c0: kube-system/kube-controller-manager-pause-576041/kube-controller-manager" id=6d99df54-be88-43dc-970b-a998b8308fe1 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:24:04 pause-576041 crio[2118]: time="2026-01-10T02:24:04.568352669Z" level=info msg="Starting container: 6f88fb6800748d0a3eda5e7f617d907ca7587329ec8a66c6c273b3598f0fb3c0" id=55f8c7b7-5f6f-4d4f-9026-b7eaee287008 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:24:04 pause-576041 crio[2118]: time="2026-01-10T02:24:04.571911462Z" level=info msg="Started container" PID=2407 containerID=6f88fb6800748d0a3eda5e7f617d907ca7587329ec8a66c6c273b3598f0fb3c0 description=kube-system/kube-controller-manager-pause-576041/kube-controller-manager id=55f8c7b7-5f6f-4d4f-9026-b7eaee287008 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6992482774c9a4b2272ebb2e4213be1409a94eeca0ef096def95f1aea4f11b64
	Jan 10 02:24:04 pause-576041 crio[2118]: time="2026-01-10T02:24:04.58077391Z" level=info msg="Created container 72a694d48aaa573611a483f5e8a36822399e9343c974f91abcb5a9311c626be9: kube-system/etcd-pause-576041/etcd" id=2741f99a-f49e-413d-b4ca-e7169e430465 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:24:04 pause-576041 crio[2118]: time="2026-01-10T02:24:04.587204201Z" level=info msg="Starting container: 72a694d48aaa573611a483f5e8a36822399e9343c974f91abcb5a9311c626be9" id=30203f6d-b224-4a74-9177-592e43d920bc name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:24:04 pause-576041 crio[2118]: time="2026-01-10T02:24:04.593975974Z" level=info msg="Created container 9049ee99038eb2e2c04d5ec2cf9af20de8bb6e424007ac7baaff8efdbd81693a: kube-system/kube-apiserver-pause-576041/kube-apiserver" id=de8f60bd-7208-4a1a-8605-dd3473050ce4 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:24:04 pause-576041 crio[2118]: time="2026-01-10T02:24:04.595295525Z" level=info msg="Starting container: 9049ee99038eb2e2c04d5ec2cf9af20de8bb6e424007ac7baaff8efdbd81693a" id=bb217005-0280-4e5d-99e1-cd83ed34332b name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:24:04 pause-576041 crio[2118]: time="2026-01-10T02:24:04.616465929Z" level=info msg="Started container" PID=2416 containerID=9049ee99038eb2e2c04d5ec2cf9af20de8bb6e424007ac7baaff8efdbd81693a description=kube-system/kube-apiserver-pause-576041/kube-apiserver id=bb217005-0280-4e5d-99e1-cd83ed34332b name=/runtime.v1.RuntimeService/StartContainer sandboxID=71797c67dfe44dc08583d1b8c5ed5454a925eab113637243ed399eb51514067b
	Jan 10 02:24:04 pause-576041 crio[2118]: time="2026-01-10T02:24:04.63076725Z" level=info msg="Started container" PID=2395 containerID=72a694d48aaa573611a483f5e8a36822399e9343c974f91abcb5a9311c626be9 description=kube-system/etcd-pause-576041/etcd id=30203f6d-b224-4a74-9177-592e43d920bc name=/runtime.v1.RuntimeService/StartContainer sandboxID=e306fe050d150ca3f74a1f7b3bd52864d015fa93dd4b97d1687fc722bcd48fd4
	Jan 10 02:24:04 pause-576041 crio[2118]: time="2026-01-10T02:24:04.977807801Z" level=info msg="Created container 9562469ec4ee2cfb0e8e8a4bbf3cd62d2410380d89c3a36a5ba7417f9b81c48f: kube-system/kube-proxy-qndk4/kube-proxy" id=dc73b72f-fcb9-4dc8-98ca-6e2ee067adfb name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:24:04 pause-576041 crio[2118]: time="2026-01-10T02:24:04.978382352Z" level=info msg="Starting container: 9562469ec4ee2cfb0e8e8a4bbf3cd62d2410380d89c3a36a5ba7417f9b81c48f" id=939ad828-5e40-4b52-a1ea-d8da8af7fcbd name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:24:04 pause-576041 crio[2118]: time="2026-01-10T02:24:04.980782042Z" level=info msg="Started container" PID=2455 containerID=9562469ec4ee2cfb0e8e8a4bbf3cd62d2410380d89c3a36a5ba7417f9b81c48f description=kube-system/kube-proxy-qndk4/kube-proxy id=939ad828-5e40-4b52-a1ea-d8da8af7fcbd name=/runtime.v1.RuntimeService/StartContainer sandboxID=907937be1a57c6e0b8786a4ad4af141eb909420648315ae4d461d99e91798d35
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.902607831Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.903006641Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.906777712Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.906806946Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.910493703Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.91052512Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.914286764Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.914318222Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.918057794Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.918090983Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.918120406Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.92183666Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.921868528Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	9562469ec4ee2       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     21 seconds ago       Running             kube-proxy                1                   907937be1a57c       kube-proxy-qndk4                       kube-system
	9049ee99038eb       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     21 seconds ago       Running             kube-apiserver            1                   71797c67dfe44       kube-apiserver-pause-576041            kube-system
	72a694d48aaa5       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     21 seconds ago       Running             etcd                      1                   e306fe050d150       etcd-pause-576041                      kube-system
	6f88fb6800748       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     21 seconds ago       Running             kube-controller-manager   1                   6992482774c9a       kube-controller-manager-pause-576041   kube-system
	e0a65106d7f56       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     21 seconds ago       Running             coredns                   1                   02bdd497fe881       coredns-7d764666f9-7zn9w               kube-system
	a5a358693fec2       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                     21 seconds ago       Running             kindnet-cni               1                   1f414281b5c96       kindnet-59pgj                          kube-system
	37aba50ea1d04       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     21 seconds ago       Running             kube-scheduler            1                   a03151a273f24       kube-scheduler-pause-576041            kube-system
	40062b67b69e4       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     34 seconds ago       Exited              coredns                   0                   02bdd497fe881       coredns-7d764666f9-7zn9w               kube-system
	514c3121cc53f       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3   45 seconds ago       Exited              kindnet-cni               0                   1f414281b5c96       kindnet-59pgj                          kube-system
	0f01d6c0bb017       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     48 seconds ago       Exited              kube-proxy                0                   907937be1a57c       kube-proxy-qndk4                       kube-system
	ebf7d912141a5       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     About a minute ago   Exited              kube-scheduler            0                   a03151a273f24       kube-scheduler-pause-576041            kube-system
	c6ceb165227b8       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     About a minute ago   Exited              kube-controller-manager   0                   6992482774c9a       kube-controller-manager-pause-576041   kube-system
	9d164e8dba73d       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     About a minute ago   Exited              kube-apiserver            0                   71797c67dfe44       kube-apiserver-pause-576041            kube-system
	4ede82649e9df       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     About a minute ago   Exited              etcd                      0                   e306fe050d150       etcd-pause-576041                      kube-system
	
	
	==> coredns [40062b67b69e410587c91e52ab6543d0898a3e194407fc104c291b0fb61c1029] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:47102 - 14504 "HINFO IN 6167918437151035469.106571216106473206. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.029521102s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e0a65106d7f56945de0463d25dda3767b8da75a39e8bc887a16a89aa81a85dce] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:34108 - 30496 "HINFO IN 390612155315855389.3968940569599401408. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021915896s
	
	
	==> describe nodes <==
	Name:               pause-576041
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-576041
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=pause-576041
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_23_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:23:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-576041
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:24:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:24:13 +0000   Sat, 10 Jan 2026 02:23:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:24:13 +0000   Sat, 10 Jan 2026 02:23:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:24:13 +0000   Sat, 10 Jan 2026 02:23:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:24:13 +0000   Sat, 10 Jan 2026 02:23:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-576041
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                8d82fef8-7100-4f25-83c5-2910dfec2fa5
	  Boot ID:                    41f52236-76b9-4dc8-bfe3-121fbdeb9659
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-7zn9w                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     50s
	  kube-system                 etcd-pause-576041                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         54s
	  kube-system                 kindnet-59pgj                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      50s
	  kube-system                 kube-apiserver-pause-576041             250m (12%)    0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-controller-manager-pause-576041    200m (10%)    0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-proxy-qndk4                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-scheduler-pause-576041             100m (5%)     0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  51s   node-controller  Node pause-576041 event: Registered Node pause-576041 in Controller
	  Normal  RegisteredNode  16s   node-controller  Node pause-576041 event: Registered Node pause-576041 in Controller
	
	
	==> dmesg <==
	[Jan10 02:04] overlayfs: idmapped layers are currently not supported
	[ +31.979586] hrtimer: interrupt took 16812983 ns
	[Jan10 02:05] overlayfs: idmapped layers are currently not supported
	[Jan10 02:06] overlayfs: idmapped layers are currently not supported
	[  +3.406975] overlayfs: idmapped layers are currently not supported
	[ +26.439263] overlayfs: idmapped layers are currently not supported
	[Jan10 02:07] overlayfs: idmapped layers are currently not supported
	[Jan10 02:08] overlayfs: idmapped layers are currently not supported
	[  +3.770589] overlayfs: idmapped layers are currently not supported
	[ +39.223525] overlayfs: idmapped layers are currently not supported
	[Jan10 02:09] overlayfs: idmapped layers are currently not supported
	[Jan10 02:10] overlayfs: idmapped layers are currently not supported
	[Jan10 02:14] overlayfs: idmapped layers are currently not supported
	[ +27.765975] overlayfs: idmapped layers are currently not supported
	[Jan10 02:15] overlayfs: idmapped layers are currently not supported
	[Jan10 02:16] overlayfs: idmapped layers are currently not supported
	[Jan10 02:17] overlayfs: idmapped layers are currently not supported
	[Jan10 02:18] overlayfs: idmapped layers are currently not supported
	[ +23.212409] overlayfs: idmapped layers are currently not supported
	[  +9.866169] overlayfs: idmapped layers are currently not supported
	[Jan10 02:19] overlayfs: idmapped layers are currently not supported
	[Jan10 02:20] overlayfs: idmapped layers are currently not supported
	[ +35.960240] overlayfs: idmapped layers are currently not supported
	[Jan10 02:21] overlayfs: idmapped layers are currently not supported
	[Jan10 02:23] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4ede82649e9dfe8ab749c2972b2df18977a0c58b6ed7e82937851b0d3af45904] <==
	{"level":"info","ts":"2026-01-10T02:23:24.039462Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:23:24.039619Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:23:24.040649Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:23:24.040828Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T02:23:24.040960Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-10T02:23:24.058050Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T02:23:24.077368Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T02:23:56.049098Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2026-01-10T02:23:56.049144Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-576041","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2026-01-10T02:23:56.049242Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2026-01-10T02:23:56.344909Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2026-01-10T02:23:56.345079Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2026-01-10T02:23:56.345143Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2026-01-10T02:23:56.345274Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2026-01-10T02:23:56.345323Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2026-01-10T02:23:56.345619Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2026-01-10T02:23:56.345766Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2026-01-10T02:23:56.345820Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2026-01-10T02:23:56.345736Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2026-01-10T02:23:56.345916Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2026-01-10T02:23:56.345959Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2026-01-10T02:23:56.348640Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2026-01-10T02:23:56.348731Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2026-01-10T02:23:56.348772Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:23:56.348793Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-576041","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [72a694d48aaa573611a483f5e8a36822399e9343c974f91abcb5a9311c626be9] <==
	{"level":"info","ts":"2026-01-10T02:24:04.878984Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:24:04.878993Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:24:04.879167Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:24:04.879183Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:24:04.880379Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2026-01-10T02:24:04.887905Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T02:24:04.888073Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T02:24:05.723005Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T02:24:05.723073Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:24:05.723126Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T02:24:05.723189Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:24:05.723217Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T02:24:05.724260Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T02:24:05.724282Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:24:05.724303Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T02:24:05.724311Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T02:24:05.726473Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-576041 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:24:05.726566Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:24:05.726759Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:24:05.726935Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:24:05.726984Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:24:05.727539Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:24:05.727644Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:24:05.729557Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T02:24:05.729815Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 02:24:26 up  1:06,  0 user,  load average: 3.58, 2.26, 1.86
	Linux pause-576041 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [514c3121cc53f0f15de5865e92e27b5e567ee5b237e801e40a76e9ea9900f34c] <==
	I0110 02:23:40.426267       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:23:40.427179       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 02:23:40.427343       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:23:40.427382       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:23:40.427418       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:23:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:23:40.628373       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:23:40.628456       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:23:40.628516       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:23:40.630210       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 02:23:40.828815       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:23:40.828841       1 metrics.go:72] Registering metrics
	I0110 02:23:40.828922       1 controller.go:711] "Syncing nftables rules"
	I0110 02:23:50.628917       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:23:50.629701       1 main.go:301] handling current node
	
	
	==> kindnet [a5a358693fec225e5b52304fae2ab85ed7798c520c95158237f942e7a9c43641] <==
	I0110 02:24:04.552717       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:24:04.552899       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 02:24:04.553009       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:24:04.553021       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:24:04.553032       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:24:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:24:04.897874       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:24:04.897984       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:24:04.898029       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:24:04.904682       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 02:24:07.704950       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:24:07.704991       1 metrics.go:72] Registering metrics
	I0110 02:24:07.705060       1 controller.go:711] "Syncing nftables rules"
	I0110 02:24:14.897784       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:24:14.897919       1 main.go:301] handling current node
	I0110 02:24:24.899945       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:24:24.899995       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9049ee99038eb2e2c04d5ec2cf9af20de8bb6e424007ac7baaff8efdbd81693a] <==
	I0110 02:24:07.600604       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0110 02:24:07.600945       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:07.601377       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 02:24:07.602112       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:07.602175       1 policy_source.go:248] refreshing policies
	I0110 02:24:07.602250       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:07.606099       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 02:24:07.610899       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:24:07.612976       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:07.613500       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0110 02:24:07.613647       1 aggregator.go:187] initial CRD sync complete...
	I0110 02:24:07.613665       1 autoregister_controller.go:144] Starting autoregister controller
	I0110 02:24:07.613671       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 02:24:07.613676       1 cache.go:39] Caches are synced for autoregister controller
	I0110 02:24:07.626201       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 02:24:07.626582       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	E0110 02:24:07.640626       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 02:24:07.662294       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:24:07.693136       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 02:24:08.301737       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:24:09.504093       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:24:10.956258       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:24:11.055980       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 02:24:11.109341       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:24:11.208046       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [9d164e8dba73d71d84e6a3b2c1f639ddd54bf56a4385bd912b51c205d7ab8f5b] <==
	W0110 02:23:56.082525       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082554       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082582       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082612       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082640       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082668       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082697       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082727       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082757       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082789       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082818       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082847       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082876       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082906       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082939       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.083100       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.083133       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.083163       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.083193       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.083223       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.083251       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.083281       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.083332       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.083360       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [6f88fb6800748d0a3eda5e7f617d907ca7587329ec8a66c6c273b3598f0fb3c0] <==
	I0110 02:24:10.708914       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.708964       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.709177       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.709248       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.710367       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.710564       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.710624       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.716696       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.716991       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:24:10.717007       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.717015       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.717514       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.717905       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.718302       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.718634       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.718781       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.718789       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.719034       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.719045       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.719052       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.751047       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.806836       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.806862       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:24:10.806874       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:24:10.823307       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-controller-manager [c6ceb165227b8c34eed391337502d5048d2059ef3f638d21a983f6e2b529db5e] <==
	I0110 02:23:35.691552       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.691722       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.691786       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.691915       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.691971       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.692527       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.702154       1 range_allocator.go:433] "Set node PodCIDR" node="pause-576041" podCIDRs=["10.244.0.0/24"]
	I0110 02:23:35.718373       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:23:35.737771       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.753958       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.773840       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.773926       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.774051       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 02:23:35.774101       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.774218       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.774626       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.775072       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-576041"
	I0110 02:23:35.775149       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0110 02:23:35.775177       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.798734       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.802961       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.802990       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:23:35.802997       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:23:35.828352       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:50.778358       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [0f01d6c0bb017febe4c27c013b7cfbaf56ed73588af6eb8be8c3a3e977a77b4a] <==
	I0110 02:23:37.738984       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:23:37.833487       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:23:37.934228       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:37.934314       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 02:23:37.934405       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:23:38.104765       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:23:38.104821       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:23:38.195839       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:23:38.196171       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:23:38.196184       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:23:38.253626       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:23:38.253982       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:23:38.258574       1 config.go:200] "Starting service config controller"
	I0110 02:23:38.259982       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:23:38.260375       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:23:38.260383       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:23:38.260885       1 config.go:309] "Starting node config controller"
	I0110 02:23:38.260902       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:23:38.260909       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:23:38.358315       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 02:23:38.360566       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:23:38.360592       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [9562469ec4ee2cfb0e8e8a4bbf3cd62d2410380d89c3a36a5ba7417f9b81c48f] <==
	I0110 02:24:05.504225       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:24:05.585432       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:24:07.690123       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:07.690290       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 02:24:07.690405       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:24:07.744859       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:24:07.744976       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:24:07.754090       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:24:07.754681       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:24:07.754754       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:24:07.758198       1 config.go:200] "Starting service config controller"
	I0110 02:24:07.758285       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:24:07.758335       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:24:07.758365       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:24:07.758404       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:24:07.758429       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:24:07.760499       1 config.go:309] "Starting node config controller"
	I0110 02:24:07.763442       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:24:07.763545       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:24:07.860676       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:24:07.860785       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 02:24:07.860850       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [37aba50ea1d04a32d0842683b6ff0546c6732da93f939a34df3511ba18add8da] <==
	I0110 02:24:05.632010       1 serving.go:386] Generated self-signed cert in-memory
	W0110 02:24:07.521969       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 02:24:07.522084       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 02:24:07.522096       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 02:24:07.522105       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 02:24:07.589144       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 02:24:07.589181       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:24:07.599269       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 02:24:07.599418       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 02:24:07.599435       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:24:07.599450       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 02:24:07.699714       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [ebf7d912141a5fa637ea2b82d2ad6f30d9bc65d7d0ba7b5366b611b45ff2fcf3] <==
	E0110 02:23:28.086747       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 02:23:28.086796       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 02:23:28.086843       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 02:23:28.876435       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 02:23:28.917739       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 02:23:28.919789       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 02:23:28.964534       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 02:23:28.986885       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 02:23:29.001002       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 02:23:29.025318       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 02:23:29.088782       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 02:23:29.129868       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E0110 02:23:29.258682       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 02:23:29.258855       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 02:23:29.260007       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 02:23:29.281453       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 02:23:29.291891       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 02:23:29.389028       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 02:23:29.410813       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	I0110 02:23:32.249071       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:56.084672       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0110 02:23:56.088988       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0110 02:23:56.089233       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0110 02:23:56.089831       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0110 02:23:56.090120       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jan 10 02:24:07 pause-576041 kubelet[1322]: E0110 02:24:07.542282    1322 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-proxy-qndk4\" is forbidden: User \"system:node:pause-576041\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-576041' and this object" podUID="c970f294-9561-4cb9-9b89-51b8f125f19a" pod="kube-system/kube-proxy-qndk4"
	Jan 10 02:24:07 pause-576041 kubelet[1322]: E0110 02:24:07.562390    1322 status_manager.go:1045] "Failed to get status for pod" err="pods \"kindnet-59pgj\" is forbidden: User \"system:node:pause-576041\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-576041' and this object" podUID="46732288-4236-4a2f-b2f5-bd8a52bf77f9" pod="kube-system/kindnet-59pgj"
	Jan 10 02:24:07 pause-576041 kubelet[1322]: E0110 02:24:07.571243    1322 status_manager.go:1045] "Failed to get status for pod" err="pods \"coredns-7d764666f9-7zn9w\" is forbidden: User \"system:node:pause-576041\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-576041' and this object" podUID="a28f8265-e8a0-4238-820b-c5f2c9f0ebab" pod="kube-system/coredns-7d764666f9-7zn9w"
	Jan 10 02:24:07 pause-576041 kubelet[1322]: E0110 02:24:07.580500    1322 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-scheduler-pause-576041\" is forbidden: User \"system:node:pause-576041\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-576041' and this object" podUID="5a769ecb9185aeb6fbadab33b29b8ba1" pod="kube-system/kube-scheduler-pause-576041"
	Jan 10 02:24:07 pause-576041 kubelet[1322]: E0110 02:24:07.585834    1322 status_manager.go:1045] "Failed to get status for pod" err="pods \"etcd-pause-576041\" is forbidden: User \"system:node:pause-576041\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-576041' and this object" podUID="7e9a2688d0b2d3262e09e1a6d40e3885" pod="kube-system/etcd-pause-576041"
	Jan 10 02:24:07 pause-576041 kubelet[1322]: E0110 02:24:07.591032    1322 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-apiserver-pause-576041\" is forbidden: User \"system:node:pause-576041\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-576041' and this object" podUID="d834128d1a7733bcd7019344856fedce" pod="kube-system/kube-apiserver-pause-576041"
	Jan 10 02:24:07 pause-576041 kubelet[1322]: E0110 02:24:07.592089    1322 status_manager.go:1045] "Failed to get status for pod" err="pods \"etcd-pause-576041\" is forbidden: User \"system:node:pause-576041\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-576041' and this object" podUID="7e9a2688d0b2d3262e09e1a6d40e3885" pod="kube-system/etcd-pause-576041"
	Jan 10 02:24:07 pause-576041 kubelet[1322]: E0110 02:24:07.593200    1322 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-apiserver-pause-576041\" is forbidden: User \"system:node:pause-576041\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-576041' and this object" podUID="d834128d1a7733bcd7019344856fedce" pod="kube-system/kube-apiserver-pause-576041"
	Jan 10 02:24:07 pause-576041 kubelet[1322]: E0110 02:24:07.598510    1322 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-576041\" is forbidden: User \"system:node:pause-576041\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-576041' and this object" podUID="08f781c18c5c021e670af99d45edae77" pod="kube-system/kube-controller-manager-pause-576041"
	Jan 10 02:24:08 pause-576041 kubelet[1322]: E0110 02:24:08.156681    1322 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-576041" containerName="kube-controller-manager"
	Jan 10 02:24:10 pause-576041 kubelet[1322]: E0110 02:24:10.804300    1322 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-576041" containerName="etcd"
	Jan 10 02:24:12 pause-576041 kubelet[1322]: W0110 02:24:12.264152    1322 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Jan 10 02:24:14 pause-576041 kubelet[1322]: E0110 02:24:14.267202    1322 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-576041" containerName="kube-scheduler"
	Jan 10 02:24:14 pause-576041 kubelet[1322]: E0110 02:24:14.463984    1322 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-576041" containerName="kube-scheduler"
	Jan 10 02:24:14 pause-576041 kubelet[1322]: E0110 02:24:14.933483    1322 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-576041" containerName="kube-apiserver"
	Jan 10 02:24:15 pause-576041 kubelet[1322]: E0110 02:24:15.466785    1322 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-576041" containerName="kube-apiserver"
	Jan 10 02:24:15 pause-576041 kubelet[1322]: E0110 02:24:15.467951    1322 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-576041" containerName="kube-scheduler"
	Jan 10 02:24:16 pause-576041 kubelet[1322]: E0110 02:24:16.449004    1322 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-7zn9w" containerName="coredns"
	Jan 10 02:24:16 pause-576041 kubelet[1322]: E0110 02:24:16.469268    1322 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-576041" containerName="kube-apiserver"
	Jan 10 02:24:18 pause-576041 kubelet[1322]: E0110 02:24:18.164662    1322 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-576041" containerName="kube-controller-manager"
	Jan 10 02:24:20 pause-576041 kubelet[1322]: E0110 02:24:20.805015    1322 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-576041" containerName="etcd"
	Jan 10 02:24:21 pause-576041 kubelet[1322]: E0110 02:24:21.481794    1322 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-576041" containerName="etcd"
	Jan 10 02:24:22 pause-576041 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 02:24:22 pause-576041 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 02:24:22 pause-576041 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-576041 -n pause-576041
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-576041 -n pause-576041: exit status 2 (748.842266ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-576041 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-576041
helpers_test.go:244: (dbg) docker inspect pause-576041:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6cb286f29b60350b1c1123ab3ff56e0ab94bf3cfe0f1bf5d9cf34d4987896a5",
	        "Created": "2026-01-10T02:23:05.092474941Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 128023,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:23:05.596553492Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/e6cb286f29b60350b1c1123ab3ff56e0ab94bf3cfe0f1bf5d9cf34d4987896a5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6cb286f29b60350b1c1123ab3ff56e0ab94bf3cfe0f1bf5d9cf34d4987896a5/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6cb286f29b60350b1c1123ab3ff56e0ab94bf3cfe0f1bf5d9cf34d4987896a5/hosts",
	        "LogPath": "/var/lib/docker/containers/e6cb286f29b60350b1c1123ab3ff56e0ab94bf3cfe0f1bf5d9cf34d4987896a5/e6cb286f29b60350b1c1123ab3ff56e0ab94bf3cfe0f1bf5d9cf34d4987896a5-json.log",
	        "Name": "/pause-576041",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-576041:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-576041",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6cb286f29b60350b1c1123ab3ff56e0ab94bf3cfe0f1bf5d9cf34d4987896a5",
	                "LowerDir": "/var/lib/docker/overlay2/4e3578eb2637d8217699ae6a3f6d53d0a00f0d116dc494de4f86d3aa748754ef-init/diff:/var/lib/docker/overlay2/2b235871be32ebd3e55c0a490bf5b289824c6be94b4d2763f5bca3b814af3fd1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4e3578eb2637d8217699ae6a3f6d53d0a00f0d116dc494de4f86d3aa748754ef/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4e3578eb2637d8217699ae6a3f6d53d0a00f0d116dc494de4f86d3aa748754ef/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4e3578eb2637d8217699ae6a3f6d53d0a00f0d116dc494de4f86d3aa748754ef/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-576041",
	                "Source": "/var/lib/docker/volumes/pause-576041/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-576041",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-576041",
	                "name.minikube.sigs.k8s.io": "pause-576041",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8abd76b8d27ac331589a8a2f9292aef26cb23f35fed491f620566b004125ad1a",
	            "SandboxKey": "/var/run/docker/netns/8abd76b8d27a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32963"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32964"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32967"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32965"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32966"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-576041": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:29:5f:c4:75:8e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0d0e90c6457f2b568d003de1f2b62d56c82a0d73560d58f9439f8e1665f714a3",
	                    "EndpointID": "5be4866c9353bf310bf77c6f760e29bc1aaf3845366c06fb1450e4b44084b131",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-576041",
	                        "e6cb286f29b6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-576041 -n pause-576041
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-576041 -n pause-576041: exit status 2 (557.313419ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-576041 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-576041 logs -n 25: (1.608535699s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                       │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ list -p multinode-940034                                                                                         │ multinode-940034            │ jenkins │ v1.37.0 │ 10 Jan 26 02:20 UTC │                     │
	│ start   │ -p multinode-940034-m02 --driver=docker  --container-runtime=crio                                                │ multinode-940034-m02        │ jenkins │ v1.37.0 │ 10 Jan 26 02:20 UTC │                     │
	│ start   │ -p multinode-940034-m03 --driver=docker  --container-runtime=crio                                                │ multinode-940034-m03        │ jenkins │ v1.37.0 │ 10 Jan 26 02:20 UTC │ 10 Jan 26 02:20 UTC │
	│ node    │ add -p multinode-940034                                                                                          │ multinode-940034            │ jenkins │ v1.37.0 │ 10 Jan 26 02:20 UTC │                     │
	│ delete  │ -p multinode-940034-m03                                                                                          │ multinode-940034-m03        │ jenkins │ v1.37.0 │ 10 Jan 26 02:20 UTC │ 10 Jan 26 02:20 UTC │
	│ delete  │ -p multinode-940034                                                                                              │ multinode-940034            │ jenkins │ v1.37.0 │ 10 Jan 26 02:20 UTC │ 10 Jan 26 02:20 UTC │
	│ start   │ -p scheduled-stop-325096 --memory=3072 --driver=docker  --container-runtime=crio                                 │ scheduled-stop-325096       │ jenkins │ v1.37.0 │ 10 Jan 26 02:20 UTC │ 10 Jan 26 02:21 UTC │
	│ stop    │ -p scheduled-stop-325096 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-325096       │ jenkins │ v1.37.0 │ 10 Jan 26 02:21 UTC │                     │
	│ stop    │ -p scheduled-stop-325096 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-325096       │ jenkins │ v1.37.0 │ 10 Jan 26 02:21 UTC │                     │
	│ stop    │ -p scheduled-stop-325096 --schedule 5m -v=5 --alsologtostderr                                                    │ scheduled-stop-325096       │ jenkins │ v1.37.0 │ 10 Jan 26 02:21 UTC │                     │
	│ stop    │ -p scheduled-stop-325096 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-325096       │ jenkins │ v1.37.0 │ 10 Jan 26 02:21 UTC │                     │
	│ stop    │ -p scheduled-stop-325096 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-325096       │ jenkins │ v1.37.0 │ 10 Jan 26 02:21 UTC │                     │
	│ stop    │ -p scheduled-stop-325096 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-325096       │ jenkins │ v1.37.0 │ 10 Jan 26 02:21 UTC │                     │
	│ stop    │ -p scheduled-stop-325096 --cancel-scheduled                                                                      │ scheduled-stop-325096       │ jenkins │ v1.37.0 │ 10 Jan 26 02:21 UTC │ 10 Jan 26 02:21 UTC │
	│ stop    │ -p scheduled-stop-325096 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-325096       │ jenkins │ v1.37.0 │ 10 Jan 26 02:21 UTC │                     │
	│ stop    │ -p scheduled-stop-325096 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-325096       │ jenkins │ v1.37.0 │ 10 Jan 26 02:21 UTC │                     │
	│ stop    │ -p scheduled-stop-325096 --schedule 15s -v=5 --alsologtostderr                                                   │ scheduled-stop-325096       │ jenkins │ v1.37.0 │ 10 Jan 26 02:21 UTC │ 10 Jan 26 02:22 UTC │
	│ delete  │ -p scheduled-stop-325096                                                                                         │ scheduled-stop-325096       │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:22 UTC │
	│ start   │ -p insufficient-storage-447390 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio │ insufficient-storage-447390 │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │                     │
	│ delete  │ -p insufficient-storage-447390                                                                                   │ insufficient-storage-447390 │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:22 UTC │
	│ start   │ -p pause-576041 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio        │ pause-576041                │ jenkins │ v1.37.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:23 UTC │
	│ start   │ -p missing-upgrade-219545 --memory=3072 --driver=docker  --container-runtime=crio                                │ missing-upgrade-219545      │ jenkins │ v1.35.0 │ 10 Jan 26 02:22 UTC │ 10 Jan 26 02:24 UTC │
	│ start   │ -p pause-576041 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ pause-576041                │ jenkins │ v1.37.0 │ 10 Jan 26 02:23 UTC │ 10 Jan 26 02:24 UTC │
	│ start   │ -p missing-upgrade-219545 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio         │ missing-upgrade-219545      │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │                     │
	│ pause   │ -p pause-576041 --alsologtostderr -v=5                                                                           │ pause-576041                │ jenkins │ v1.37.0 │ 10 Jan 26 02:24 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:24:01
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
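
The header above documents the klog-style line format used throughout this log. As a rough stand-alone illustration (not minikube code; the regexp and sample line are my own), a Go sketch that splits such a line into its fields:

package main

import (
	"fmt"
	"regexp"
)

// Matches the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" format
// described in the log header above.
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

func main() {
	line := "I0110 02:24:01.604608  132852 out.go:360] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog-formatted line")
		return
	}
	// m[1]=severity, m[2]=mmdd, m[3]=hh:mm:ss.uuuuuu, m[4]=threadid, m[5]=file:line, m[6]=msg
	fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%q\n", m[1], m[2], m[3], m[4], m[5], m[6])
}
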
	I0110 02:24:01.604608  132852 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:24:01.604771  132852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:24:01.604780  132852 out.go:374] Setting ErrFile to fd 2...
	I0110 02:24:01.604786  132852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:24:01.605067  132852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:24:01.605447  132852 out.go:368] Setting JSON to false
	I0110 02:24:01.606291  132852 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":3991,"bootTime":1768007851,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 02:24:01.606361  132852 start.go:143] virtualization:  
	I0110 02:24:01.612202  132852 out.go:179] * [missing-upgrade-219545] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:24:01.616944  132852 notify.go:221] Checking for updates...
	I0110 02:24:01.617478  132852 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:24:01.621604  132852 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:24:01.624398  132852 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:24:01.627215  132852 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 02:24:01.630082  132852 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:24:01.632896  132852 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:24:01.636301  132852 config.go:182] Loaded profile config "missing-upgrade-219545": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0110 02:24:01.639696  132852 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I0110 02:24:01.642461  132852 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:24:01.685047  132852 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:24:01.685161  132852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:24:01.791499  132852 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-10 02:24:01.781931729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:24:01.791609  132852 docker.go:319] overlay module found
	I0110 02:24:01.794736  132852 out.go:179] * Using the docker driver based on existing profile
	I0110 02:24:01.797488  132852 start.go:309] selected driver: docker
	I0110 02:24:01.797508  132852 start.go:928] validating driver "docker" against &{Name:missing-upgrade-219545 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-219545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:24:01.797953  132852 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:24:01.798625  132852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:24:01.880683  132852 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-10 02:24:01.869729137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:24:01.880991  132852 cni.go:84] Creating CNI manager for ""
	I0110 02:24:01.881059  132852 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:24:01.881110  132852 start.go:353] cluster config:
	{Name:missing-upgrade-219545 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:missing-upgrade-219545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentP
ID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:24:01.884278  132852 out.go:179] * Starting "missing-upgrade-219545" primary control-plane node in "missing-upgrade-219545" cluster
	I0110 02:24:01.887052  132852 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:24:01.890034  132852 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:24:01.893199  132852 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0110 02:24:01.893250  132852 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4
	I0110 02:24:01.893260  132852 cache.go:65] Caching tarball of preloaded images
	I0110 02:24:01.893337  132852 preload.go:251] Found /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 02:24:01.893346  132852 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I0110 02:24:01.893447  132852 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/missing-upgrade-219545/config.json ...
	I0110 02:24:01.893655  132852 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0110 02:24:01.922533  132852 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0110 02:24:01.922554  132852 cache.go:158] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0110 02:24:01.922568  132852 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:24:01.922596  132852 start.go:360] acquireMachinesLock for missing-upgrade-219545: {Name:mk4336ddb56fd92565447cbd148589c9940f25a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:24:01.922653  132852 start.go:364] duration metric: took 36.824µs to acquireMachinesLock for "missing-upgrade-219545"
	I0110 02:24:01.922677  132852 start.go:96] Skipping create...Using existing machine configuration
	I0110 02:24:01.922687  132852 fix.go:54] fixHost starting: 
	I0110 02:24:01.922932  132852 cli_runner.go:164] Run: docker container inspect missing-upgrade-219545 --format={{.State.Status}}
	W0110 02:24:01.941895  132852 cli_runner.go:211] docker container inspect missing-upgrade-219545 --format={{.State.Status}} returned with exit code 1
	I0110 02:24:01.941954  132852 fix.go:112] recreateIfNeeded on missing-upgrade-219545: state= err=unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:01.941976  132852 fix.go:117] machineExists: false. err=machine does not exist
	I0110 02:24:01.945178  132852 out.go:179] * docker "missing-upgrade-219545" container is missing, will recreate.
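
The lines above show how minikube concludes the machine must be recreated: `docker container inspect` exits non-zero with "No such container", so fixHost records machineExists: false. A minimal stand-alone sketch of that existence check (the container name is copied from the log; the helper itself is illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState runs `docker container inspect` and reports whether the
// container exists; a non-zero exit ("No such container" on stderr) means it
// does not, which is what the log above records before recreating the machine.
func containerState(name string) (string, bool) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		return strings.TrimSpace(string(out)), false
	}
	return strings.TrimSpace(string(out)), true
}

func main() {
	if state, ok := containerState("missing-upgrade-219545"); ok {
		fmt.Println("container state:", state)
	} else {
		fmt.Println("container missing, would recreate:", state)
	}
}
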
	I0110 02:24:01.322928  132222 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:24:01.322953  132222 machine.go:97] duration metric: took 6.911557826s to provisionDockerMachine
	I0110 02:24:01.322965  132222 start.go:293] postStartSetup for "pause-576041" (driver="docker")
	I0110 02:24:01.322975  132222 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:24:01.323041  132222 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:24:01.323105  132222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-576041
	I0110 02:24:01.343097  132222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/pause-576041/id_rsa Username:docker}
	I0110 02:24:01.458699  132222 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:24:01.463225  132222 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:24:01.463297  132222 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:24:01.463332  132222 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/addons for local assets ...
	I0110 02:24:01.463404  132222 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/files for local assets ...
	I0110 02:24:01.463541  132222 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> 41682.pem in /etc/ssl/certs
	I0110 02:24:01.463681  132222 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:24:01.503192  132222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:24:01.527264  132222 start.go:296] duration metric: took 204.284315ms for postStartSetup
	I0110 02:24:01.527358  132222 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:24:01.527397  132222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-576041
	I0110 02:24:01.555915  132222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/pause-576041/id_rsa Username:docker}
	I0110 02:24:01.677121  132222 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:24:01.685939  132222 fix.go:56] duration metric: took 7.309419693s for fixHost
	I0110 02:24:01.685961  132222 start.go:83] releasing machines lock for "pause-576041", held for 7.309463114s
	I0110 02:24:01.686028  132222 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-576041
	I0110 02:24:01.724258  132222 ssh_runner.go:195] Run: cat /version.json
	I0110 02:24:01.724359  132222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-576041
	I0110 02:24:01.724616  132222 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:24:01.724679  132222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-576041
	I0110 02:24:01.774346  132222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/pause-576041/id_rsa Username:docker}
	I0110 02:24:01.778545  132222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/pause-576041/id_rsa Username:docker}
	I0110 02:24:01.895360  132222 ssh_runner.go:195] Run: systemctl --version
	I0110 02:24:02.020237  132222 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:24:02.076208  132222 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:24:02.081030  132222 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:24:02.081118  132222 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:24:02.094300  132222 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 02:24:02.094325  132222 start.go:496] detecting cgroup driver to use...
	I0110 02:24:02.094379  132222 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 02:24:02.094440  132222 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:24:02.112308  132222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:24:02.128659  132222 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:24:02.128750  132222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:24:02.147988  132222 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:24:02.163954  132222 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:24:02.315005  132222 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:24:02.464516  132222 docker.go:234] disabling docker service ...
	I0110 02:24:02.464588  132222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:24:02.484650  132222 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:24:02.500053  132222 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:24:02.640868  132222 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:24:02.774951  132222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:24:02.788540  132222 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:24:02.801881  132222 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:24:02.801968  132222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:24:02.810761  132222 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 02:24:02.810837  132222 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:24:02.820181  132222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:24:02.828752  132222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:24:02.837540  132222 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:24:02.845693  132222 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:24:02.854354  132222 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:24:02.862465  132222 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:24:02.870998  132222 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:24:02.878372  132222 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:24:02.885724  132222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:24:03.014413  132222 ssh_runner.go:195] Run: sudo systemctl restart crio
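
The sed commands above point cri-o at the registry.k8s.io/pause:3.10.1 pause image and the cgroupfs cgroup manager before crio is restarted. A rough Go equivalent of that whole-line rewrite, assuming an invented config fragment in place of /etc/crio/crio.conf.d/02-crio.conf:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Invented fragment standing in for /etc/crio/crio.conf.d/02-crio.conf.
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"

	// Same effect as the two sed substitutions in the log: replace the whole
	// matching line, whatever its previous value was.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf)
}
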
	I0110 02:24:03.222719  132222 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:24:03.222793  132222 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:24:03.226637  132222 start.go:574] Will wait 60s for crictl version
	I0110 02:24:03.226701  132222 ssh_runner.go:195] Run: which crictl
	I0110 02:24:03.230097  132222 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:24:03.254626  132222 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:24:03.254718  132222 ssh_runner.go:195] Run: crio --version
	I0110 02:24:03.281958  132222 ssh_runner.go:195] Run: crio --version
	I0110 02:24:03.316317  132222 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:24:03.319189  132222 cli_runner.go:164] Run: docker network inspect pause-576041 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:24:03.335073  132222 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 02:24:03.339007  132222 kubeadm.go:884] updating cluster {Name:pause-576041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-576041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:24:03.339148  132222 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:24:03.339213  132222 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:24:03.379116  132222 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:24:03.379137  132222 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:24:03.379197  132222 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:24:03.404884  132222 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:24:03.404907  132222 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:24:03.404914  132222 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 02:24:03.405010  132222 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-576041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:pause-576041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:24:03.405092  132222 ssh_runner.go:195] Run: crio config
	I0110 02:24:03.474680  132222 cni.go:84] Creating CNI manager for ""
	I0110 02:24:03.474761  132222 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:24:03.474793  132222 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:24:03.474847  132222 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-576041 NodeName:pause-576041 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:24:03.475010  132222 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-576041"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
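
The generated kubeadm config above stacks four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A small stand-alone sketch (abbreviated sample content, standard library only, not how kubeadm itself parses the file) that splits such a multi-document file and reports each document's kind:

package main

import (
	"fmt"
	"strings"
)

// kinds splits a multi-document YAML string on "---" separators and returns
// the value of the first "kind:" line in each document.
func kinds(multiDoc string) []string {
	var out []string
	for _, doc := range strings.Split(multiDoc, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			t := strings.TrimSpace(line)
			if strings.HasPrefix(t, "kind:") {
				out = append(out, strings.TrimSpace(strings.TrimPrefix(t, "kind:")))
				break
			}
		}
	}
	return out
}

func main() {
	cfg := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\n" +
		"apiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n---\n" +
		"apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n---\n" +
		"apiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
	fmt.Println(kinds(cfg)) // [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
}
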
	
	I0110 02:24:03.475100  132222 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:24:03.484150  132222 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:24:03.484222  132222 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:24:03.492517  132222 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0110 02:24:03.505376  132222 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:24:03.520006  132222 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I0110 02:24:03.533439  132222 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:24:03.537406  132222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:24:03.668184  132222 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:24:03.681171  132222 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/pause-576041 for IP: 192.168.76.2
	I0110 02:24:03.681197  132222 certs.go:195] generating shared ca certs ...
	I0110 02:24:03.681220  132222 certs.go:227] acquiring lock for ca certs: {Name:mk60bb8578732e3e8efcf11c7d77fb48585828c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:24:03.681426  132222 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key
	I0110 02:24:03.681487  132222 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key
	I0110 02:24:03.681500  132222 certs.go:257] generating profile certs ...
	I0110 02:24:03.681589  132222 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/pause-576041/client.key
	I0110 02:24:03.681663  132222 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/pause-576041/apiserver.key.cd55d7b4
	I0110 02:24:03.681710  132222 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/pause-576041/proxy-client.key
	I0110 02:24:03.681826  132222 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem (1338 bytes)
	W0110 02:24:03.681865  132222 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168_empty.pem, impossibly tiny 0 bytes
	I0110 02:24:03.681883  132222 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:24:03.681913  132222 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:24:03.681950  132222 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:24:03.681978  132222 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem (1679 bytes)
	I0110 02:24:03.682030  132222 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:24:03.682673  132222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:24:03.702022  132222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:24:03.718959  132222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:24:03.737003  132222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 02:24:03.754933  132222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/pause-576041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0110 02:24:03.773032  132222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/pause-576041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:24:03.790270  132222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/pause-576041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:24:03.807871  132222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/pause-576041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 02:24:03.825486  132222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:24:03.842849  132222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I0110 02:24:03.859667  132222 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I0110 02:24:03.876615  132222 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:24:03.890245  132222 ssh_runner.go:195] Run: openssl version
	I0110 02:24:03.898536  132222 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:24:03.906342  132222 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:24:03.914206  132222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:24:03.917942  132222 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:24:03.918026  132222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:24:03.958537  132222 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:24:03.965622  132222 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I0110 02:24:03.972727  132222 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I0110 02:24:03.979949  132222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I0110 02:24:03.983303  132222 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:58 /usr/share/ca-certificates/4168.pem
	I0110 02:24:03.983372  132222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I0110 02:24:04.024484  132222 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:24:04.032234  132222 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I0110 02:24:04.039727  132222 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I0110 02:24:04.047116  132222 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I0110 02:24:04.050817  132222 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:58 /usr/share/ca-certificates/41682.pem
	I0110 02:24:04.050914  132222 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I0110 02:24:04.092954  132222 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:24:04.100771  132222 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:24:04.104539  132222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 02:24:04.146105  132222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 02:24:04.193382  132222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 02:24:04.237589  132222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 02:24:04.281497  132222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 02:24:04.347563  132222 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
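
Each `openssl x509 -noout -in <cert> -checkend 86400` run above asks whether a control-plane certificate expires within the next 24 hours. Roughly the same check with Go's crypto/x509, using one of the certificate paths named in the log (the helper is illustrative, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -noout -in <path> -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
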
	I0110 02:24:04.419845  132222 kubeadm.go:401] StartCluster: {Name:pause-576041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-576041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:24:04.419984  132222 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:24:04.420089  132222 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:24:04.520647  132222 cri.go:96] found id: "e0a65106d7f56945de0463d25dda3767b8da75a39e8bc887a16a89aa81a85dce"
	I0110 02:24:04.520666  132222 cri.go:96] found id: "a5a358693fec225e5b52304fae2ab85ed7798c520c95158237f942e7a9c43641"
	I0110 02:24:04.520671  132222 cri.go:96] found id: "37aba50ea1d04a32d0842683b6ff0546c6732da93f939a34df3511ba18add8da"
	I0110 02:24:04.520679  132222 cri.go:96] found id: "40062b67b69e410587c91e52ab6543d0898a3e194407fc104c291b0fb61c1029"
	I0110 02:24:04.520683  132222 cri.go:96] found id: "514c3121cc53f0f15de5865e92e27b5e567ee5b237e801e40a76e9ea9900f34c"
	I0110 02:24:04.520688  132222 cri.go:96] found id: "0f01d6c0bb017febe4c27c013b7cfbaf56ed73588af6eb8be8c3a3e977a77b4a"
	I0110 02:24:04.520691  132222 cri.go:96] found id: "ebf7d912141a5fa637ea2b82d2ad6f30d9bc65d7d0ba7b5366b611b45ff2fcf3"
	I0110 02:24:04.520694  132222 cri.go:96] found id: "c6ceb165227b8c34eed391337502d5048d2059ef3f638d21a983f6e2b529db5e"
	I0110 02:24:04.520697  132222 cri.go:96] found id: "9d164e8dba73d71d84e6a3b2c1f639ddd54bf56a4385bd912b51c205d7ab8f5b"
	I0110 02:24:04.520704  132222 cri.go:96] found id: "4ede82649e9dfe8ab749c2972b2df18977a0c58b6ed7e82937851b0d3af45904"
	I0110 02:24:04.520707  132222 cri.go:96] found id: ""
	I0110 02:24:04.520753  132222 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 02:24:04.564698  132222 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:24:04Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:24:04.564779  132222 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:24:04.581047  132222 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 02:24:04.581063  132222 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 02:24:04.581112  132222 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 02:24:04.605167  132222 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 02:24:04.605816  132222 kubeconfig.go:125] found "pause-576041" server: "https://192.168.76.2:8443"
	I0110 02:24:04.606806  132222 kapi.go:59] client config for pause-576041: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22414-2353/.minikube/profiles/pause-576041/client.crt", KeyFile:"/home/jenkins/minikube-integration/22414-2353/.minikube/profiles/pause-576041/client.key", CAFile:"/home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f7bf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0110 02:24:04.607274  132222 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I0110 02:24:04.607286  132222 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0110 02:24:04.607291  132222 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0110 02:24:04.607296  132222 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0110 02:24:04.607300  132222 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I0110 02:24:04.607304  132222 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I0110 02:24:04.607585  132222 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 02:24:04.628054  132222 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0110 02:24:04.628132  132222 kubeadm.go:602] duration metric: took 47.063348ms to restartPrimaryControlPlane
	I0110 02:24:04.628158  132222 kubeadm.go:403] duration metric: took 208.32368ms to StartCluster
	I0110 02:24:04.628199  132222 settings.go:142] acquiring lock: {Name:mkf19ab3097ad600bdb33c5b47b20062ddaabac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:24:04.628286  132222 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:24:04.629281  132222 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:24:04.629789  132222 config.go:182] Loaded profile config "pause-576041": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:24:04.629571  132222 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:24:04.629892  132222 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:24:04.633891  132222 out.go:179] * Enabled addons: 
	I0110 02:24:04.634002  132222 out.go:179] * Verifying Kubernetes components...
	I0110 02:24:01.947984  132852 delete.go:124] DEMOLISHING missing-upgrade-219545 ...
	I0110 02:24:01.948090  132852 cli_runner.go:164] Run: docker container inspect missing-upgrade-219545 --format={{.State.Status}}
	W0110 02:24:01.965426  132852 cli_runner.go:211] docker container inspect missing-upgrade-219545 --format={{.State.Status}} returned with exit code 1
	W0110 02:24:01.965488  132852 stop.go:83] unable to get state: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:01.965510  132852 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:01.965978  132852 cli_runner.go:164] Run: docker container inspect missing-upgrade-219545 --format={{.State.Status}}
	W0110 02:24:01.982193  132852 cli_runner.go:211] docker container inspect missing-upgrade-219545 --format={{.State.Status}} returned with exit code 1
	I0110 02:24:01.982263  132852 delete.go:82] Unable to get host status for missing-upgrade-219545, assuming it has already been deleted: state: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:01.982331  132852 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-219545
	W0110 02:24:01.997212  132852 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-219545 returned with exit code 1
	I0110 02:24:01.997244  132852 kic.go:371] could not find the container missing-upgrade-219545 to remove it. will try anyways
	I0110 02:24:01.997297  132852 cli_runner.go:164] Run: docker container inspect missing-upgrade-219545 --format={{.State.Status}}
	W0110 02:24:02.019641  132852 cli_runner.go:211] docker container inspect missing-upgrade-219545 --format={{.State.Status}} returned with exit code 1
	W0110 02:24:02.019716  132852 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:02.019786  132852 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-219545 /bin/bash -c "sudo init 0"
	W0110 02:24:02.040760  132852 cli_runner.go:211] docker exec --privileged -t missing-upgrade-219545 /bin/bash -c "sudo init 0" returned with exit code 1
	I0110 02:24:02.040792  132852 oci.go:659] error shutdown missing-upgrade-219545: docker exec --privileged -t missing-upgrade-219545 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:03.040969  132852 cli_runner.go:164] Run: docker container inspect missing-upgrade-219545 --format={{.State.Status}}
	W0110 02:24:03.061848  132852 cli_runner.go:211] docker container inspect missing-upgrade-219545 --format={{.State.Status}} returned with exit code 1
	I0110 02:24:03.061921  132852 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:03.061931  132852 oci.go:673] temporary error: container missing-upgrade-219545 status is  but expect it to be exited
	I0110 02:24:03.061978  132852 retry.go:84] will retry after 600ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:03.632514  132852 cli_runner.go:164] Run: docker container inspect missing-upgrade-219545 --format={{.State.Status}}
	W0110 02:24:03.649720  132852 cli_runner.go:211] docker container inspect missing-upgrade-219545 --format={{.State.Status}} returned with exit code 1
	I0110 02:24:03.649789  132852 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:03.649802  132852 oci.go:673] temporary error: container missing-upgrade-219545 status is  but expect it to be exited
	I0110 02:24:04.459231  132852 cli_runner.go:164] Run: docker container inspect missing-upgrade-219545 --format={{.State.Status}}
	W0110 02:24:04.477303  132852 cli_runner.go:211] docker container inspect missing-upgrade-219545 --format={{.State.Status}} returned with exit code 1
	I0110 02:24:04.477365  132852 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:04.477374  132852 oci.go:673] temporary error: container missing-upgrade-219545 status is  but expect it to be exited
	I0110 02:24:05.141945  132852 cli_runner.go:164] Run: docker container inspect missing-upgrade-219545 --format={{.State.Status}}
	W0110 02:24:05.162731  132852 cli_runner.go:211] docker container inspect missing-upgrade-219545 --format={{.State.Status}} returned with exit code 1
	I0110 02:24:05.162787  132852 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:05.162796  132852 oci.go:673] temporary error: container missing-upgrade-219545 status is  but expect it to be exited
	I0110 02:24:04.638104  132222 addons.go:530] duration metric: took 8.206472ms for enable addons: enabled=[]
	I0110 02:24:04.638228  132222 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:24:04.858285  132222 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:24:04.876771  132222 node_ready.go:35] waiting up to 6m0s for node "pause-576041" to be "Ready" ...
	I0110 02:24:07.528694  132222 node_ready.go:49] node "pause-576041" is "Ready"
	I0110 02:24:07.528719  132222 node_ready.go:38] duration metric: took 2.651921754s for node "pause-576041" to be "Ready" ...
	I0110 02:24:07.528733  132222 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:24:07.528789  132222 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:24:07.545116  132222 api_server.go:72] duration metric: took 2.915144702s to wait for apiserver process to appear ...
	I0110 02:24:07.545137  132222 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:24:07.545156  132222 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:24:07.564553  132222 api_server.go:325] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0110 02:24:07.564629  132222 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0110 02:24:08.045228  132222 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:24:08.054363  132222 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 02:24:08.054438  132222 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 02:24:08.546207  132222 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:24:08.554290  132222 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 02:24:08.555346  132222 api_server.go:141] control plane version: v1.35.0
	I0110 02:24:08.555389  132222 api_server.go:131] duration metric: took 1.010245255s to wait for apiserver health ...
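	[editor's note, not part of the captured log] The healthz exchange above follows the usual apiserver startup sequence: the anonymous probe is first rejected with 403, then /healthz returns 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still running, and finally 200 once the control plane is up. Below is a minimal Go sketch of that polling pattern; it is illustrative only and not minikube's api_server.go, and the endpoint URL, timeouts, and skipping of TLS verification are assumptions made for the sketch.

	// healthz_wait.go: illustrative sketch of polling /healthz until it returns 200 "ok",
	// treating 403/500 responses as transient while the control plane finishes starting.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		// Anonymous probe, so certificate verification is skipped here (assumption for the sketch).
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported "ok"
				}
				// 403 (anonymous user) and 500 (post-start hooks still running) are expected
				// during startup; keep polling.
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}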
	I0110 02:24:08.555398  132222 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:24:08.561287  132222 system_pods.go:59] 7 kube-system pods found
	I0110 02:24:08.561385  132222 system_pods.go:61] "coredns-7d764666f9-7zn9w" [a28f8265-e8a0-4238-820b-c5f2c9f0ebab] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:24:08.561409  132222 system_pods.go:61] "etcd-pause-576041" [7863ae93-ee32-47b8-9116-c86b72984d72] Running
	I0110 02:24:08.561443  132222 system_pods.go:61] "kindnet-59pgj" [46732288-4236-4a2f-b2f5-bd8a52bf77f9] Running
	I0110 02:24:08.561468  132222 system_pods.go:61] "kube-apiserver-pause-576041" [3bc247ab-e27a-4946-adeb-e9cda426217e] Running
	I0110 02:24:08.561490  132222 system_pods.go:61] "kube-controller-manager-pause-576041" [3f2e82c9-6bad-4218-8f2d-36e08d86f432] Running
	I0110 02:24:08.561528  132222 system_pods.go:61] "kube-proxy-qndk4" [c970f294-9561-4cb9-9b89-51b8f125f19a] Running
	I0110 02:24:08.561553  132222 system_pods.go:61] "kube-scheduler-pause-576041" [59fe2dc7-7b6c-434d-9ab5-6decf26bed87] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:24:08.561573  132222 system_pods.go:74] duration metric: took 6.167938ms to wait for pod list to return data ...
	I0110 02:24:08.561612  132222 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:24:08.564302  132222 default_sa.go:45] found service account: "default"
	I0110 02:24:08.564334  132222 default_sa.go:55] duration metric: took 2.694887ms for default service account to be created ...
	I0110 02:24:08.564343  132222 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:24:08.567212  132222 system_pods.go:86] 7 kube-system pods found
	I0110 02:24:08.567281  132222 system_pods.go:89] "coredns-7d764666f9-7zn9w" [a28f8265-e8a0-4238-820b-c5f2c9f0ebab] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:24:08.567296  132222 system_pods.go:89] "etcd-pause-576041" [7863ae93-ee32-47b8-9116-c86b72984d72] Running
	I0110 02:24:08.567304  132222 system_pods.go:89] "kindnet-59pgj" [46732288-4236-4a2f-b2f5-bd8a52bf77f9] Running
	I0110 02:24:08.567309  132222 system_pods.go:89] "kube-apiserver-pause-576041" [3bc247ab-e27a-4946-adeb-e9cda426217e] Running
	I0110 02:24:08.567314  132222 system_pods.go:89] "kube-controller-manager-pause-576041" [3f2e82c9-6bad-4218-8f2d-36e08d86f432] Running
	I0110 02:24:08.567320  132222 system_pods.go:89] "kube-proxy-qndk4" [c970f294-9561-4cb9-9b89-51b8f125f19a] Running
	I0110 02:24:08.567328  132222 system_pods.go:89] "kube-scheduler-pause-576041" [59fe2dc7-7b6c-434d-9ab5-6decf26bed87] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:24:08.567338  132222 system_pods.go:126] duration metric: took 2.989576ms to wait for k8s-apps to be running ...
	I0110 02:24:08.567365  132222 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:24:08.567421  132222 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:24:08.587603  132222 system_svc.go:56] duration metric: took 20.229355ms WaitForService to wait for kubelet
	I0110 02:24:08.587640  132222 kubeadm.go:587] duration metric: took 3.957666925s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:24:08.587659  132222 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:24:08.591784  132222 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 02:24:08.591905  132222 node_conditions.go:123] node cpu capacity is 2
	I0110 02:24:08.591922  132222 node_conditions.go:105] duration metric: took 4.257911ms to run NodePressure ...
	I0110 02:24:08.591936  132222 start.go:242] waiting for startup goroutines ...
	I0110 02:24:08.591946  132222 start.go:247] waiting for cluster config update ...
	I0110 02:24:08.591958  132222 start.go:256] writing updated cluster config ...
	I0110 02:24:08.592250  132222 ssh_runner.go:195] Run: rm -f paused
	I0110 02:24:08.597956  132222 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:24:08.598577  132222 kapi.go:59] client config for pause-576041: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22414-2353/.minikube/profiles/pause-576041/client.crt", KeyFile:"/home/jenkins/minikube-integration/22414-2353/.minikube/profiles/pause-576041/client.key", CAFile:"/home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f7bf60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0110 02:24:08.602015  132222 pod_ready.go:83] waiting for pod "coredns-7d764666f9-7zn9w" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:24:07.038048  132852 cli_runner.go:164] Run: docker container inspect missing-upgrade-219545 --format={{.State.Status}}
	W0110 02:24:07.075191  132852 cli_runner.go:211] docker container inspect missing-upgrade-219545 --format={{.State.Status}} returned with exit code 1
	I0110 02:24:07.075246  132852 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:07.075255  132852 oci.go:673] temporary error: container missing-upgrade-219545 status is  but expect it to be exited
	I0110 02:24:10.651955  132852 cli_runner.go:164] Run: docker container inspect missing-upgrade-219545 --format={{.State.Status}}
	W0110 02:24:10.675916  132852 cli_runner.go:211] docker container inspect missing-upgrade-219545 --format={{.State.Status}} returned with exit code 1
	I0110 02:24:10.675974  132852 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:10.675983  132852 oci.go:673] temporary error: container missing-upgrade-219545 status is  but expect it to be exited
	I0110 02:24:10.676018  132852 retry.go:84] will retry after 2.6s: couldn't verify container is exited. %v: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	W0110 02:24:10.636705  132222 pod_ready.go:104] pod "coredns-7d764666f9-7zn9w" is not "Ready", error: <nil>
	W0110 02:24:13.106760  132222 pod_ready.go:104] pod "coredns-7d764666f9-7zn9w" is not "Ready", error: <nil>
	I0110 02:24:13.301882  132852 cli_runner.go:164] Run: docker container inspect missing-upgrade-219545 --format={{.State.Status}}
	W0110 02:24:13.320235  132852 cli_runner.go:211] docker container inspect missing-upgrade-219545 --format={{.State.Status}} returned with exit code 1
	I0110 02:24:13.320297  132852 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:13.320319  132852 oci.go:673] temporary error: container missing-upgrade-219545 status is  but expect it to be exited
	W0110 02:24:15.108693  132222 pod_ready.go:104] pod "coredns-7d764666f9-7zn9w" is not "Ready", error: <nil>
	I0110 02:24:16.607719  132222 pod_ready.go:94] pod "coredns-7d764666f9-7zn9w" is "Ready"
	I0110 02:24:16.607742  132222 pod_ready.go:86] duration metric: took 8.005706677s for pod "coredns-7d764666f9-7zn9w" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:24:16.611370  132222 pod_ready.go:83] waiting for pod "etcd-pause-576041" in "kube-system" namespace to be "Ready" or be gone ...
	W0110 02:24:18.616226  132222 pod_ready.go:104] pod "etcd-pause-576041" is not "Ready", error: <nil>
	I0110 02:24:20.284812  132852 cli_runner.go:164] Run: docker container inspect missing-upgrade-219545 --format={{.State.Status}}
	W0110 02:24:20.299563  132852 cli_runner.go:211] docker container inspect missing-upgrade-219545 --format={{.State.Status}} returned with exit code 1
	I0110 02:24:20.299664  132852 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	I0110 02:24:20.299673  132852 oci.go:673] temporary error: container missing-upgrade-219545 status is  but expect it to be exited
	I0110 02:24:20.299705  132852 oci.go:88] couldn't shut down missing-upgrade-219545 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-219545": docker container inspect missing-upgrade-219545 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-219545
	 
	I0110 02:24:20.299765  132852 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-219545
	I0110 02:24:20.313469  132852 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-219545
	W0110 02:24:20.328092  132852 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-219545 returned with exit code 1
	I0110 02:24:20.328178  132852 cli_runner.go:164] Run: docker network inspect missing-upgrade-219545 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:24:20.343650  132852 cli_runner.go:164] Run: docker network rm missing-upgrade-219545
	I0110 02:24:20.446086  132852 fix.go:124] Sleeping 1 second for extra luck!
	I0110 02:24:21.446248  132852 start.go:125] createHost starting for "" (driver="docker")
	I0110 02:24:21.449567  132852 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:24:21.449693  132852 start.go:159] libmachine.API.Create for "missing-upgrade-219545" (driver="docker")
	I0110 02:24:21.449726  132852 client.go:173] LocalClient.Create starting
	I0110 02:24:21.449812  132852 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem
	I0110 02:24:21.449846  132852 main.go:144] libmachine: Decoding PEM data...
	I0110 02:24:21.449862  132852 main.go:144] libmachine: Parsing certificate...
	I0110 02:24:21.449909  132852 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem
	I0110 02:24:21.449925  132852 main.go:144] libmachine: Decoding PEM data...
	I0110 02:24:21.449936  132852 main.go:144] libmachine: Parsing certificate...
	I0110 02:24:21.450218  132852 cli_runner.go:164] Run: docker network inspect missing-upgrade-219545 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:24:21.465601  132852 cli_runner.go:211] docker network inspect missing-upgrade-219545 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:24:21.465686  132852 network_create.go:284] running [docker network inspect missing-upgrade-219545] to gather additional debugging logs...
	I0110 02:24:21.465703  132852 cli_runner.go:164] Run: docker network inspect missing-upgrade-219545
	W0110 02:24:21.492185  132852 cli_runner.go:211] docker network inspect missing-upgrade-219545 returned with exit code 1
	I0110 02:24:21.492216  132852 network_create.go:287] error running [docker network inspect missing-upgrade-219545]: docker network inspect missing-upgrade-219545: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-219545 not found
	I0110 02:24:21.492229  132852 network_create.go:289] output of [docker network inspect missing-upgrade-219545]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-219545 not found
	
	** /stderr **
	I0110 02:24:21.492330  132852 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:24:21.508226  132852 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dba69832168e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:28:cf:b8:5c:6c} reservation:<nil>}
	I0110 02:24:21.508497  132852 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1bcca9209747 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d2:8b:83:a4:43:ae} reservation:<nil>}
	I0110 02:24:21.508843  132852 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-383ddfacc8f8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:26:ff:fb:97:66:af} reservation:<nil>}
	I0110 02:24:21.509156  132852 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0d0e90c6457f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:12:6e:77:5b:2f:89} reservation:<nil>}
	I0110 02:24:21.509543  132852 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ba1640}
	I0110 02:24:21.509570  132852 network_create.go:124] attempt to create docker network missing-upgrade-219545 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0110 02:24:21.509624  132852 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-219545 missing-upgrade-219545
	I0110 02:24:21.570786  132852 network_create.go:108] docker network missing-upgrade-219545 192.168.85.0/24 created
	I0110 02:24:21.570818  132852 kic.go:121] calculated static IP "192.168.85.2" for the "missing-upgrade-219545" container
	I0110 02:24:21.570896  132852 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:24:21.586573  132852 cli_runner.go:164] Run: docker volume create missing-upgrade-219545 --label name.minikube.sigs.k8s.io=missing-upgrade-219545 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:24:21.603343  132852 oci.go:103] Successfully created a docker volume missing-upgrade-219545
	I0110 02:24:21.603451  132852 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-219545-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-219545 --entrypoint /usr/bin/test -v missing-upgrade-219545:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	W0110 02:24:20.616570  132222 pod_ready.go:104] pod "etcd-pause-576041" is not "Ready", error: <nil>
	I0110 02:24:21.116788  132222 pod_ready.go:94] pod "etcd-pause-576041" is "Ready"
	I0110 02:24:21.116817  132222 pod_ready.go:86] duration metric: took 4.505416876s for pod "etcd-pause-576041" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:24:21.119012  132222 pod_ready.go:83] waiting for pod "kube-apiserver-pause-576041" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:24:21.123388  132222 pod_ready.go:94] pod "kube-apiserver-pause-576041" is "Ready"
	I0110 02:24:21.123421  132222 pod_ready.go:86] duration metric: took 4.383824ms for pod "kube-apiserver-pause-576041" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:24:21.125598  132222 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-576041" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:24:21.129694  132222 pod_ready.go:94] pod "kube-controller-manager-pause-576041" is "Ready"
	I0110 02:24:21.129722  132222 pod_ready.go:86] duration metric: took 4.096135ms for pod "kube-controller-manager-pause-576041" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:24:21.132068  132222 pod_ready.go:83] waiting for pod "kube-proxy-qndk4" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:24:21.315275  132222 pod_ready.go:94] pod "kube-proxy-qndk4" is "Ready"
	I0110 02:24:21.315304  132222 pod_ready.go:86] duration metric: took 183.210591ms for pod "kube-proxy-qndk4" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:24:21.515603  132222 pod_ready.go:83] waiting for pod "kube-scheduler-pause-576041" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:24:21.915425  132222 pod_ready.go:94] pod "kube-scheduler-pause-576041" is "Ready"
	I0110 02:24:21.915451  132222 pod_ready.go:86] duration metric: took 399.823351ms for pod "kube-scheduler-pause-576041" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:24:21.915464  132222 pod_ready.go:40] duration metric: took 13.317476874s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
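	[editor's note, not part of the captured log] The pod_ready entries above wait until each labelled kube-system pod reports the PodReady condition (or disappears). The following client-go sketch shows one way to perform the same readiness check; it is illustrative only and not minikube's pod_ready.go, and the kubeconfig path and namespace are assumptions made for the sketch (minikube builds its client from the profile's certificates instead, as the kapi.go line above shows).

	// pod_ready_check.go: illustrative sketch of checking the PodReady condition
	// on the kube-system pods with client-go.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig path for the sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, pod := range pods.Items {
			ready := false
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("%s ready=%v\n", pod.Name, ready)
		}
	}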
	I0110 02:24:21.997899  132222 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 02:24:22.001575  132222 out.go:203] 
	W0110 02:24:22.006321  132222 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 02:24:22.009187  132222 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:24:22.013207  132222 out.go:179] * Done! kubectl is now configured to use "pause-576041" cluster and "default" namespace by default
	I0110 02:24:21.991759  132852 oci.go:107] Successfully prepared a docker volume missing-upgrade-219545
	I0110 02:24:21.991907  132852 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I0110 02:24:21.991932  132852 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 02:24:21.991997  132852 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-219545:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Jan 10 02:24:04 pause-576041 crio[2118]: time="2026-01-10T02:24:04.567723071Z" level=info msg="Created container 6f88fb6800748d0a3eda5e7f617d907ca7587329ec8a66c6c273b3598f0fb3c0: kube-system/kube-controller-manager-pause-576041/kube-controller-manager" id=6d99df54-be88-43dc-970b-a998b8308fe1 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:24:04 pause-576041 crio[2118]: time="2026-01-10T02:24:04.568352669Z" level=info msg="Starting container: 6f88fb6800748d0a3eda5e7f617d907ca7587329ec8a66c6c273b3598f0fb3c0" id=55f8c7b7-5f6f-4d4f-9026-b7eaee287008 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:24:04 pause-576041 crio[2118]: time="2026-01-10T02:24:04.571911462Z" level=info msg="Started container" PID=2407 containerID=6f88fb6800748d0a3eda5e7f617d907ca7587329ec8a66c6c273b3598f0fb3c0 description=kube-system/kube-controller-manager-pause-576041/kube-controller-manager id=55f8c7b7-5f6f-4d4f-9026-b7eaee287008 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6992482774c9a4b2272ebb2e4213be1409a94eeca0ef096def95f1aea4f11b64
	Jan 10 02:24:04 pause-576041 crio[2118]: time="2026-01-10T02:24:04.58077391Z" level=info msg="Created container 72a694d48aaa573611a483f5e8a36822399e9343c974f91abcb5a9311c626be9: kube-system/etcd-pause-576041/etcd" id=2741f99a-f49e-413d-b4ca-e7169e430465 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:24:04 pause-576041 crio[2118]: time="2026-01-10T02:24:04.587204201Z" level=info msg="Starting container: 72a694d48aaa573611a483f5e8a36822399e9343c974f91abcb5a9311c626be9" id=30203f6d-b224-4a74-9177-592e43d920bc name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:24:04 pause-576041 crio[2118]: time="2026-01-10T02:24:04.593975974Z" level=info msg="Created container 9049ee99038eb2e2c04d5ec2cf9af20de8bb6e424007ac7baaff8efdbd81693a: kube-system/kube-apiserver-pause-576041/kube-apiserver" id=de8f60bd-7208-4a1a-8605-dd3473050ce4 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:24:04 pause-576041 crio[2118]: time="2026-01-10T02:24:04.595295525Z" level=info msg="Starting container: 9049ee99038eb2e2c04d5ec2cf9af20de8bb6e424007ac7baaff8efdbd81693a" id=bb217005-0280-4e5d-99e1-cd83ed34332b name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:24:04 pause-576041 crio[2118]: time="2026-01-10T02:24:04.616465929Z" level=info msg="Started container" PID=2416 containerID=9049ee99038eb2e2c04d5ec2cf9af20de8bb6e424007ac7baaff8efdbd81693a description=kube-system/kube-apiserver-pause-576041/kube-apiserver id=bb217005-0280-4e5d-99e1-cd83ed34332b name=/runtime.v1.RuntimeService/StartContainer sandboxID=71797c67dfe44dc08583d1b8c5ed5454a925eab113637243ed399eb51514067b
	Jan 10 02:24:04 pause-576041 crio[2118]: time="2026-01-10T02:24:04.63076725Z" level=info msg="Started container" PID=2395 containerID=72a694d48aaa573611a483f5e8a36822399e9343c974f91abcb5a9311c626be9 description=kube-system/etcd-pause-576041/etcd id=30203f6d-b224-4a74-9177-592e43d920bc name=/runtime.v1.RuntimeService/StartContainer sandboxID=e306fe050d150ca3f74a1f7b3bd52864d015fa93dd4b97d1687fc722bcd48fd4
	Jan 10 02:24:04 pause-576041 crio[2118]: time="2026-01-10T02:24:04.977807801Z" level=info msg="Created container 9562469ec4ee2cfb0e8e8a4bbf3cd62d2410380d89c3a36a5ba7417f9b81c48f: kube-system/kube-proxy-qndk4/kube-proxy" id=dc73b72f-fcb9-4dc8-98ca-6e2ee067adfb name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:24:04 pause-576041 crio[2118]: time="2026-01-10T02:24:04.978382352Z" level=info msg="Starting container: 9562469ec4ee2cfb0e8e8a4bbf3cd62d2410380d89c3a36a5ba7417f9b81c48f" id=939ad828-5e40-4b52-a1ea-d8da8af7fcbd name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:24:04 pause-576041 crio[2118]: time="2026-01-10T02:24:04.980782042Z" level=info msg="Started container" PID=2455 containerID=9562469ec4ee2cfb0e8e8a4bbf3cd62d2410380d89c3a36a5ba7417f9b81c48f description=kube-system/kube-proxy-qndk4/kube-proxy id=939ad828-5e40-4b52-a1ea-d8da8af7fcbd name=/runtime.v1.RuntimeService/StartContainer sandboxID=907937be1a57c6e0b8786a4ad4af141eb909420648315ae4d461d99e91798d35
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.902607831Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.903006641Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.906777712Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.906806946Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.910493703Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.91052512Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.914286764Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.914318222Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.918057794Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.918090983Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.918120406Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.92183666Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:24:14 pause-576041 crio[2118]: time="2026-01-10T02:24:14.921868528Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	9562469ec4ee2       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     25 seconds ago       Running             kube-proxy                1                   907937be1a57c       kube-proxy-qndk4                       kube-system
	9049ee99038eb       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     25 seconds ago       Running             kube-apiserver            1                   71797c67dfe44       kube-apiserver-pause-576041            kube-system
	72a694d48aaa5       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     25 seconds ago       Running             etcd                      1                   e306fe050d150       etcd-pause-576041                      kube-system
	6f88fb6800748       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     25 seconds ago       Running             kube-controller-manager   1                   6992482774c9a       kube-controller-manager-pause-576041   kube-system
	e0a65106d7f56       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     25 seconds ago       Running             coredns                   1                   02bdd497fe881       coredns-7d764666f9-7zn9w               kube-system
	a5a358693fec2       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                     25 seconds ago       Running             kindnet-cni               1                   1f414281b5c96       kindnet-59pgj                          kube-system
	37aba50ea1d04       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     25 seconds ago       Running             kube-scheduler            1                   a03151a273f24       kube-scheduler-pause-576041            kube-system
	40062b67b69e4       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                     38 seconds ago       Exited              coredns                   0                   02bdd497fe881       coredns-7d764666f9-7zn9w               kube-system
	514c3121cc53f       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3   49 seconds ago       Exited              kindnet-cni               0                   1f414281b5c96       kindnet-59pgj                          kube-system
	0f01d6c0bb017       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     52 seconds ago       Exited              kube-proxy                0                   907937be1a57c       kube-proxy-qndk4                       kube-system
	ebf7d912141a5       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     About a minute ago   Exited              kube-scheduler            0                   a03151a273f24       kube-scheduler-pause-576041            kube-system
	c6ceb165227b8       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     About a minute ago   Exited              kube-controller-manager   0                   6992482774c9a       kube-controller-manager-pause-576041   kube-system
	9d164e8dba73d       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     About a minute ago   Exited              kube-apiserver            0                   71797c67dfe44       kube-apiserver-pause-576041            kube-system
	4ede82649e9df       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     About a minute ago   Exited              etcd                      0                   e306fe050d150       etcd-pause-576041                      kube-system
	
	
	==> coredns [40062b67b69e410587c91e52ab6543d0898a3e194407fc104c291b0fb61c1029] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:47102 - 14504 "HINFO IN 6167918437151035469.106571216106473206. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.029521102s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e0a65106d7f56945de0463d25dda3767b8da75a39e8bc887a16a89aa81a85dce] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:34108 - 30496 "HINFO IN 390612155315855389.3968940569599401408. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021915896s
	
	
	==> describe nodes <==
	Name:               pause-576041
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-576041
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=pause-576041
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_23_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:23:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-576041
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:24:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:24:13 +0000   Sat, 10 Jan 2026 02:23:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:24:13 +0000   Sat, 10 Jan 2026 02:23:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:24:13 +0000   Sat, 10 Jan 2026 02:23:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:24:13 +0000   Sat, 10 Jan 2026 02:23:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-576041
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                8d82fef8-7100-4f25-83c5-2910dfec2fa5
	  Boot ID:                    41f52236-76b9-4dc8-bfe3-121fbdeb9659
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-7zn9w                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     53s
	  kube-system                 etcd-pause-576041                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         57s
	  kube-system                 kindnet-59pgj                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      53s
	  kube-system                 kube-apiserver-pause-576041             250m (12%)    0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-controller-manager-pause-576041    200m (10%)    0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-proxy-qndk4                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-scheduler-pause-576041             100m (5%)     0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  54s   node-controller  Node pause-576041 event: Registered Node pause-576041 in Controller
	  Normal  RegisteredNode  19s   node-controller  Node pause-576041 event: Registered Node pause-576041 in Controller
	
	
	==> dmesg <==
	[Jan10 02:04] overlayfs: idmapped layers are currently not supported
	[ +31.979586] hrtimer: interrupt took 16812983 ns
	[Jan10 02:05] overlayfs: idmapped layers are currently not supported
	[Jan10 02:06] overlayfs: idmapped layers are currently not supported
	[  +3.406975] overlayfs: idmapped layers are currently not supported
	[ +26.439263] overlayfs: idmapped layers are currently not supported
	[Jan10 02:07] overlayfs: idmapped layers are currently not supported
	[Jan10 02:08] overlayfs: idmapped layers are currently not supported
	[  +3.770589] overlayfs: idmapped layers are currently not supported
	[ +39.223525] overlayfs: idmapped layers are currently not supported
	[Jan10 02:09] overlayfs: idmapped layers are currently not supported
	[Jan10 02:10] overlayfs: idmapped layers are currently not supported
	[Jan10 02:14] overlayfs: idmapped layers are currently not supported
	[ +27.765975] overlayfs: idmapped layers are currently not supported
	[Jan10 02:15] overlayfs: idmapped layers are currently not supported
	[Jan10 02:16] overlayfs: idmapped layers are currently not supported
	[Jan10 02:17] overlayfs: idmapped layers are currently not supported
	[Jan10 02:18] overlayfs: idmapped layers are currently not supported
	[ +23.212409] overlayfs: idmapped layers are currently not supported
	[  +9.866169] overlayfs: idmapped layers are currently not supported
	[Jan10 02:19] overlayfs: idmapped layers are currently not supported
	[Jan10 02:20] overlayfs: idmapped layers are currently not supported
	[ +35.960240] overlayfs: idmapped layers are currently not supported
	[Jan10 02:21] overlayfs: idmapped layers are currently not supported
	[Jan10 02:23] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4ede82649e9dfe8ab749c2972b2df18977a0c58b6ed7e82937851b0d3af45904] <==
	{"level":"info","ts":"2026-01-10T02:23:24.039462Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:23:24.039619Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:23:24.040649Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:23:24.040828Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T02:23:24.040960Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-10T02:23:24.058050Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T02:23:24.077368Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T02:23:56.049098Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2026-01-10T02:23:56.049144Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-576041","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2026-01-10T02:23:56.049242Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2026-01-10T02:23:56.344909Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2026-01-10T02:23:56.345079Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2026-01-10T02:23:56.345143Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2026-01-10T02:23:56.345274Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2026-01-10T02:23:56.345323Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2026-01-10T02:23:56.345619Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2026-01-10T02:23:56.345766Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2026-01-10T02:23:56.345820Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2026-01-10T02:23:56.345736Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2026-01-10T02:23:56.345916Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2026-01-10T02:23:56.345959Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2026-01-10T02:23:56.348640Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2026-01-10T02:23:56.348731Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2026-01-10T02:23:56.348772Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:23:56.348793Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-576041","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [72a694d48aaa573611a483f5e8a36822399e9343c974f91abcb5a9311c626be9] <==
	{"level":"info","ts":"2026-01-10T02:24:04.878984Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:24:04.878993Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:24:04.879167Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:24:04.879183Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:24:04.880379Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2026-01-10T02:24:04.887905Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T02:24:04.888073Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T02:24:05.723005Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T02:24:05.723073Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:24:05.723126Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T02:24:05.723189Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:24:05.723217Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T02:24:05.724260Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T02:24:05.724282Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:24:05.724303Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T02:24:05.724311Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T02:24:05.726473Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-576041 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:24:05.726566Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:24:05.726759Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:24:05.726935Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:24:05.726984Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:24:05.727539Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:24:05.727644Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:24:05.729557Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T02:24:05.729815Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 02:24:30 up  1:06,  0 user,  load average: 3.46, 2.25, 1.86
	Linux pause-576041 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [514c3121cc53f0f15de5865e92e27b5e567ee5b237e801e40a76e9ea9900f34c] <==
	I0110 02:23:40.426267       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:23:40.427179       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 02:23:40.427343       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:23:40.427382       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:23:40.427418       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:23:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:23:40.628373       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:23:40.628456       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:23:40.628516       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:23:40.630210       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 02:23:40.828815       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:23:40.828841       1 metrics.go:72] Registering metrics
	I0110 02:23:40.828922       1 controller.go:711] "Syncing nftables rules"
	I0110 02:23:50.628917       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:23:50.629701       1 main.go:301] handling current node
	
	
	==> kindnet [a5a358693fec225e5b52304fae2ab85ed7798c520c95158237f942e7a9c43641] <==
	I0110 02:24:04.552717       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:24:04.552899       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 02:24:04.553009       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:24:04.553021       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:24:04.553032       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:24:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:24:04.897874       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:24:04.897984       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:24:04.898029       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:24:04.904682       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 02:24:07.704950       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:24:07.704991       1 metrics.go:72] Registering metrics
	I0110 02:24:07.705060       1 controller.go:711] "Syncing nftables rules"
	I0110 02:24:14.897784       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:24:14.897919       1 main.go:301] handling current node
	I0110 02:24:24.899945       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:24:24.899995       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9049ee99038eb2e2c04d5ec2cf9af20de8bb6e424007ac7baaff8efdbd81693a] <==
	I0110 02:24:07.600604       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0110 02:24:07.600945       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:07.601377       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 02:24:07.602112       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:07.602175       1 policy_source.go:248] refreshing policies
	I0110 02:24:07.602250       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:07.606099       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 02:24:07.610899       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:24:07.612976       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:07.613500       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0110 02:24:07.613647       1 aggregator.go:187] initial CRD sync complete...
	I0110 02:24:07.613665       1 autoregister_controller.go:144] Starting autoregister controller
	I0110 02:24:07.613671       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 02:24:07.613676       1 cache.go:39] Caches are synced for autoregister controller
	I0110 02:24:07.626201       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 02:24:07.626582       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	E0110 02:24:07.640626       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 02:24:07.662294       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:24:07.693136       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 02:24:08.301737       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:24:09.504093       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:24:10.956258       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:24:11.055980       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 02:24:11.109341       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:24:11.208046       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [9d164e8dba73d71d84e6a3b2c1f639ddd54bf56a4385bd912b51c205d7ab8f5b] <==
	W0110 02:23:56.082525       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082554       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082582       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082612       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082640       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082668       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082697       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082727       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082757       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082789       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082818       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082847       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082876       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082906       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.082939       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.083100       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.083133       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.083163       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.083193       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.083223       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.083251       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.083281       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.083332       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0110 02:23:56.083360       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [6f88fb6800748d0a3eda5e7f617d907ca7587329ec8a66c6c273b3598f0fb3c0] <==
	I0110 02:24:10.708914       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.708964       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.709177       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.709248       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.710367       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.710564       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.710624       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.716696       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.716991       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:24:10.717007       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.717015       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.717514       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.717905       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.718302       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.718634       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.718781       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.718789       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.719034       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.719045       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.719052       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.751047       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.806836       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:10.806862       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:24:10.806874       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:24:10.823307       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-controller-manager [c6ceb165227b8c34eed391337502d5048d2059ef3f638d21a983f6e2b529db5e] <==
	I0110 02:23:35.691552       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.691722       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.691786       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.691915       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.691971       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.692527       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.702154       1 range_allocator.go:433] "Set node PodCIDR" node="pause-576041" podCIDRs=["10.244.0.0/24"]
	I0110 02:23:35.718373       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:23:35.737771       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.753958       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.773840       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.773926       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.774051       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 02:23:35.774101       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.774218       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.774626       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.775072       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-576041"
	I0110 02:23:35.775149       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0110 02:23:35.775177       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.798734       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.802961       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:35.802990       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:23:35.802997       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:23:35.828352       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:50.778358       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [0f01d6c0bb017febe4c27c013b7cfbaf56ed73588af6eb8be8c3a3e977a77b4a] <==
	I0110 02:23:37.738984       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:23:37.833487       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:23:37.934228       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:37.934314       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 02:23:37.934405       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:23:38.104765       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:23:38.104821       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:23:38.195839       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:23:38.196171       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:23:38.196184       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:23:38.253626       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:23:38.253982       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:23:38.258574       1 config.go:200] "Starting service config controller"
	I0110 02:23:38.259982       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:23:38.260375       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:23:38.260383       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:23:38.260885       1 config.go:309] "Starting node config controller"
	I0110 02:23:38.260902       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:23:38.260909       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:23:38.358315       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 02:23:38.360566       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:23:38.360592       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [9562469ec4ee2cfb0e8e8a4bbf3cd62d2410380d89c3a36a5ba7417f9b81c48f] <==
	I0110 02:24:05.504225       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:24:05.585432       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:24:07.690123       1 shared_informer.go:377] "Caches are synced"
	I0110 02:24:07.690290       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 02:24:07.690405       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:24:07.744859       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:24:07.744976       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:24:07.754090       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:24:07.754681       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:24:07.754754       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:24:07.758198       1 config.go:200] "Starting service config controller"
	I0110 02:24:07.758285       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:24:07.758335       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:24:07.758365       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:24:07.758404       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:24:07.758429       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:24:07.760499       1 config.go:309] "Starting node config controller"
	I0110 02:24:07.763442       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:24:07.763545       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:24:07.860676       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:24:07.860785       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 02:24:07.860850       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [37aba50ea1d04a32d0842683b6ff0546c6732da93f939a34df3511ba18add8da] <==
	I0110 02:24:05.632010       1 serving.go:386] Generated self-signed cert in-memory
	W0110 02:24:07.521969       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 02:24:07.522084       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 02:24:07.522096       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 02:24:07.522105       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 02:24:07.589144       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 02:24:07.589181       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:24:07.599269       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 02:24:07.599418       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 02:24:07.599435       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:24:07.599450       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 02:24:07.699714       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [ebf7d912141a5fa637ea2b82d2ad6f30d9bc65d7d0ba7b5366b611b45ff2fcf3] <==
	E0110 02:23:28.086747       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 02:23:28.086796       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 02:23:28.086843       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 02:23:28.876435       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 02:23:28.917739       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 02:23:28.919789       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 02:23:28.964534       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 02:23:28.986885       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 02:23:29.001002       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 02:23:29.025318       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 02:23:29.088782       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 02:23:29.129868       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E0110 02:23:29.258682       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 02:23:29.258855       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 02:23:29.260007       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 02:23:29.281453       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 02:23:29.291891       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 02:23:29.389028       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 02:23:29.410813       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	I0110 02:23:32.249071       1 shared_informer.go:377] "Caches are synced"
	I0110 02:23:56.084672       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0110 02:23:56.088988       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0110 02:23:56.089233       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0110 02:23:56.089831       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0110 02:23:56.090120       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jan 10 02:24:07 pause-576041 kubelet[1322]: E0110 02:24:07.542282    1322 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-proxy-qndk4\" is forbidden: User \"system:node:pause-576041\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-576041' and this object" podUID="c970f294-9561-4cb9-9b89-51b8f125f19a" pod="kube-system/kube-proxy-qndk4"
	Jan 10 02:24:07 pause-576041 kubelet[1322]: E0110 02:24:07.562390    1322 status_manager.go:1045] "Failed to get status for pod" err="pods \"kindnet-59pgj\" is forbidden: User \"system:node:pause-576041\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-576041' and this object" podUID="46732288-4236-4a2f-b2f5-bd8a52bf77f9" pod="kube-system/kindnet-59pgj"
	Jan 10 02:24:07 pause-576041 kubelet[1322]: E0110 02:24:07.571243    1322 status_manager.go:1045] "Failed to get status for pod" err="pods \"coredns-7d764666f9-7zn9w\" is forbidden: User \"system:node:pause-576041\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-576041' and this object" podUID="a28f8265-e8a0-4238-820b-c5f2c9f0ebab" pod="kube-system/coredns-7d764666f9-7zn9w"
	Jan 10 02:24:07 pause-576041 kubelet[1322]: E0110 02:24:07.580500    1322 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-scheduler-pause-576041\" is forbidden: User \"system:node:pause-576041\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-576041' and this object" podUID="5a769ecb9185aeb6fbadab33b29b8ba1" pod="kube-system/kube-scheduler-pause-576041"
	Jan 10 02:24:07 pause-576041 kubelet[1322]: E0110 02:24:07.585834    1322 status_manager.go:1045] "Failed to get status for pod" err="pods \"etcd-pause-576041\" is forbidden: User \"system:node:pause-576041\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-576041' and this object" podUID="7e9a2688d0b2d3262e09e1a6d40e3885" pod="kube-system/etcd-pause-576041"
	Jan 10 02:24:07 pause-576041 kubelet[1322]: E0110 02:24:07.591032    1322 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-apiserver-pause-576041\" is forbidden: User \"system:node:pause-576041\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-576041' and this object" podUID="d834128d1a7733bcd7019344856fedce" pod="kube-system/kube-apiserver-pause-576041"
	Jan 10 02:24:07 pause-576041 kubelet[1322]: E0110 02:24:07.592089    1322 status_manager.go:1045] "Failed to get status for pod" err="pods \"etcd-pause-576041\" is forbidden: User \"system:node:pause-576041\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-576041' and this object" podUID="7e9a2688d0b2d3262e09e1a6d40e3885" pod="kube-system/etcd-pause-576041"
	Jan 10 02:24:07 pause-576041 kubelet[1322]: E0110 02:24:07.593200    1322 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-apiserver-pause-576041\" is forbidden: User \"system:node:pause-576041\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-576041' and this object" podUID="d834128d1a7733bcd7019344856fedce" pod="kube-system/kube-apiserver-pause-576041"
	Jan 10 02:24:07 pause-576041 kubelet[1322]: E0110 02:24:07.598510    1322 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-576041\" is forbidden: User \"system:node:pause-576041\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-576041' and this object" podUID="08f781c18c5c021e670af99d45edae77" pod="kube-system/kube-controller-manager-pause-576041"
	Jan 10 02:24:08 pause-576041 kubelet[1322]: E0110 02:24:08.156681    1322 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-576041" containerName="kube-controller-manager"
	Jan 10 02:24:10 pause-576041 kubelet[1322]: E0110 02:24:10.804300    1322 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-576041" containerName="etcd"
	Jan 10 02:24:12 pause-576041 kubelet[1322]: W0110 02:24:12.264152    1322 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Jan 10 02:24:14 pause-576041 kubelet[1322]: E0110 02:24:14.267202    1322 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-576041" containerName="kube-scheduler"
	Jan 10 02:24:14 pause-576041 kubelet[1322]: E0110 02:24:14.463984    1322 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-576041" containerName="kube-scheduler"
	Jan 10 02:24:14 pause-576041 kubelet[1322]: E0110 02:24:14.933483    1322 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-576041" containerName="kube-apiserver"
	Jan 10 02:24:15 pause-576041 kubelet[1322]: E0110 02:24:15.466785    1322 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-576041" containerName="kube-apiserver"
	Jan 10 02:24:15 pause-576041 kubelet[1322]: E0110 02:24:15.467951    1322 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-576041" containerName="kube-scheduler"
	Jan 10 02:24:16 pause-576041 kubelet[1322]: E0110 02:24:16.449004    1322 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-7zn9w" containerName="coredns"
	Jan 10 02:24:16 pause-576041 kubelet[1322]: E0110 02:24:16.469268    1322 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-576041" containerName="kube-apiserver"
	Jan 10 02:24:18 pause-576041 kubelet[1322]: E0110 02:24:18.164662    1322 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-576041" containerName="kube-controller-manager"
	Jan 10 02:24:20 pause-576041 kubelet[1322]: E0110 02:24:20.805015    1322 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-576041" containerName="etcd"
	Jan 10 02:24:21 pause-576041 kubelet[1322]: E0110 02:24:21.481794    1322 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-576041" containerName="etcd"
	Jan 10 02:24:22 pause-576041 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 02:24:22 pause-576041 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 02:24:22 pause-576041 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-576041 -n pause-576041
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-576041 -n pause-576041: exit status 2 (446.27733ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-576041 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (9.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-736081 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-736081 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (301.918419ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:42:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-736081 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-736081 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-736081 describe deploy/metrics-server -n kube-system: exit status 1 (88.153956ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-736081 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-736081
helpers_test.go:244: (dbg) docker inspect old-k8s-version-736081:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd",
	        "Created": "2026-01-10T02:41:32.674196479Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 196771,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:41:32.733132219Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd/hosts",
	        "LogPath": "/var/lib/docker/containers/a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd/a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd-json.log",
	        "Name": "/old-k8s-version-736081",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-736081:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-736081",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd",
	                "LowerDir": "/var/lib/docker/overlay2/900fe6e640aea64a7d338ba444221d388c553a58a464e856a98199bafb2b388e-init/diff:/var/lib/docker/overlay2/2b235871be32ebd3e55c0a490bf5b289824c6be94b4d2763f5bca3b814af3fd1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/900fe6e640aea64a7d338ba444221d388c553a58a464e856a98199bafb2b388e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/900fe6e640aea64a7d338ba444221d388c553a58a464e856a98199bafb2b388e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/900fe6e640aea64a7d338ba444221d388c553a58a464e856a98199bafb2b388e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-736081",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-736081/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-736081",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-736081",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-736081",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "024813a072e80591fe80110afd102fbba524ccd50143a30b8297e5f4d722af66",
	            "SandboxKey": "/var/run/docker/netns/024813a072e8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-736081": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:89:27:32:3e:78",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "19dbfb1518ac950f9693694cb0229451b62340819974a22c4a52e8192582b225",
	                    "EndpointID": "ff36ad67bdbab27f804ec855c1e172f1a405bdec8ed68eec07463a669139b929",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-736081",
	                        "a4844cb5bc1c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
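As an aside on the inspect output above: the Ports map records the SSH mapping 22/tcp -> 127.0.0.1:33048, and this is the same value the harness re-derives later in the "Last Start" log using the identical inspect template (shown here exactly as it appears in the log below; the trailing profile name is the container under test):

	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081

Per the Ports section above, this prints '33048', which is the port the ssh client entries (Port:33048) in the provisioning log connect to.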
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-736081 -n old-k8s-version-736081
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-736081 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-736081 logs -n 25: (1.125601953s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-989144 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo containerd config dump                                                                                                                                                                                                  │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo crio config                                                                                                                                                                                                             │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ delete  │ -p cilium-989144                                                                                                                                                                                                                              │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │ 10 Jan 26 02:36 UTC │
	│ start   │ -p cert-expiration-213257 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-213257    │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │ 10 Jan 26 02:37 UTC │
	│ start   │ -p cert-expiration-213257 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-213257    │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:40 UTC │
	│ delete  │ -p cert-expiration-213257                                                                                                                                                                                                                     │ cert-expiration-213257    │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:40 UTC │
	│ start   │ -p force-systemd-flag-038359 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-038359 │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │                     │
	│ delete  │ -p force-systemd-env-088457                                                                                                                                                                                                                   │ force-systemd-env-088457  │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:40 UTC │
	│ start   │ -p cert-options-295914 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-295914       │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:41 UTC │
	│ ssh     │ cert-options-295914 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-295914       │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:41 UTC │
	│ ssh     │ -p cert-options-295914 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-295914       │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:41 UTC │
	│ delete  │ -p cert-options-295914                                                                                                                                                                                                                        │ cert-options-295914       │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:41 UTC │
	│ start   │ -p old-k8s-version-736081 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:42 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-736081 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:41:26
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:41:26.946554  196335 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:41:26.946732  196335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:41:26.946757  196335 out.go:374] Setting ErrFile to fd 2...
	I0110 02:41:26.946782  196335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:41:26.947201  196335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:41:26.947773  196335 out.go:368] Setting JSON to false
	I0110 02:41:26.948800  196335 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5036,"bootTime":1768007851,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 02:41:26.949051  196335 start.go:143] virtualization:  
	I0110 02:41:26.952734  196335 out.go:179] * [old-k8s-version-736081] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:41:26.957194  196335 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:41:26.957255  196335 notify.go:221] Checking for updates...
	I0110 02:41:26.963718  196335 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:41:26.966975  196335 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:41:26.970080  196335 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 02:41:26.973176  196335 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:41:26.976239  196335 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:41:26.979747  196335 config.go:182] Loaded profile config "force-systemd-flag-038359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:41:26.979899  196335 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:41:27.013459  196335 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:41:27.013606  196335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:41:27.067767  196335 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:41:27.058130458 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:41:27.067936  196335 docker.go:319] overlay module found
	I0110 02:41:27.071256  196335 out.go:179] * Using the docker driver based on user configuration
	I0110 02:41:27.074238  196335 start.go:309] selected driver: docker
	I0110 02:41:27.074257  196335 start.go:928] validating driver "docker" against <nil>
	I0110 02:41:27.074270  196335 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:41:27.074988  196335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:41:27.137133  196335 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:41:27.128050574 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:41:27.137282  196335 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 02:41:27.137506  196335 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:41:27.140541  196335 out.go:179] * Using Docker driver with root privileges
	I0110 02:41:27.143574  196335 cni.go:84] Creating CNI manager for ""
	I0110 02:41:27.143642  196335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:41:27.143656  196335 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 02:41:27.143738  196335 start.go:353] cluster config:
	{Name:old-k8s-version-736081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-736081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:41:27.146890  196335 out.go:179] * Starting "old-k8s-version-736081" primary control-plane node in "old-k8s-version-736081" cluster
	I0110 02:41:27.149835  196335 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:41:27.152735  196335 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:41:27.155729  196335 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0110 02:41:27.155777  196335 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0110 02:41:27.155829  196335 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:41:27.155889  196335 cache.go:65] Caching tarball of preloaded images
	I0110 02:41:27.155971  196335 preload.go:251] Found /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 02:41:27.155981  196335 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0110 02:41:27.156088  196335 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/config.json ...
	I0110 02:41:27.156114  196335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/config.json: {Name:mk98b5cd23b7a45317c774f4465d596dcdfb371c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:41:27.183199  196335 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:41:27.183221  196335 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:41:27.183242  196335 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:41:27.183282  196335 start.go:360] acquireMachinesLock for old-k8s-version-736081: {Name:mk5c17d262a96ce13234dbad01b409b9bd033454 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:41:27.183390  196335 start.go:364] duration metric: took 88.473µs to acquireMachinesLock for "old-k8s-version-736081"
	I0110 02:41:27.183419  196335 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-736081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-736081 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:41:27.183486  196335 start.go:125] createHost starting for "" (driver="docker")
	I0110 02:41:27.189194  196335 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:41:27.189435  196335 start.go:159] libmachine.API.Create for "old-k8s-version-736081" (driver="docker")
	I0110 02:41:27.189476  196335 client.go:173] LocalClient.Create starting
	I0110 02:41:27.189541  196335 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem
	I0110 02:41:27.189576  196335 main.go:144] libmachine: Decoding PEM data...
	I0110 02:41:27.189599  196335 main.go:144] libmachine: Parsing certificate...
	I0110 02:41:27.189658  196335 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem
	I0110 02:41:27.189681  196335 main.go:144] libmachine: Decoding PEM data...
	I0110 02:41:27.189696  196335 main.go:144] libmachine: Parsing certificate...
	I0110 02:41:27.190080  196335 cli_runner.go:164] Run: docker network inspect old-k8s-version-736081 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:41:27.209173  196335 cli_runner.go:211] docker network inspect old-k8s-version-736081 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:41:27.209258  196335 network_create.go:284] running [docker network inspect old-k8s-version-736081] to gather additional debugging logs...
	I0110 02:41:27.209275  196335 cli_runner.go:164] Run: docker network inspect old-k8s-version-736081
	W0110 02:41:27.225781  196335 cli_runner.go:211] docker network inspect old-k8s-version-736081 returned with exit code 1
	I0110 02:41:27.225807  196335 network_create.go:287] error running [docker network inspect old-k8s-version-736081]: docker network inspect old-k8s-version-736081: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-736081 not found
	I0110 02:41:27.225819  196335 network_create.go:289] output of [docker network inspect old-k8s-version-736081]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-736081 not found
	
	** /stderr **
	I0110 02:41:27.225925  196335 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:41:27.244391  196335 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dba69832168e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:28:cf:b8:5c:6c} reservation:<nil>}
	I0110 02:41:27.244715  196335 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1bcca9209747 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d2:8b:83:a4:43:ae} reservation:<nil>}
	I0110 02:41:27.245020  196335 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-383ddfacc8f8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:26:ff:fb:97:66:af} reservation:<nil>}
	I0110 02:41:27.245414  196335 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a02b20}
	I0110 02:41:27.245437  196335 network_create.go:124] attempt to create docker network old-k8s-version-736081 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0110 02:41:27.245488  196335 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-736081 old-k8s-version-736081
	I0110 02:41:27.304932  196335 network_create.go:108] docker network old-k8s-version-736081 192.168.76.0/24 created
	I0110 02:41:27.304980  196335 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-736081" container
	I0110 02:41:27.305054  196335 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:41:27.320715  196335 cli_runner.go:164] Run: docker volume create old-k8s-version-736081 --label name.minikube.sigs.k8s.io=old-k8s-version-736081 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:41:27.336687  196335 oci.go:103] Successfully created a docker volume old-k8s-version-736081
	I0110 02:41:27.336777  196335 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-736081-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-736081 --entrypoint /usr/bin/test -v old-k8s-version-736081:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 02:41:27.845894  196335 oci.go:107] Successfully prepared a docker volume old-k8s-version-736081
	I0110 02:41:27.845961  196335 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0110 02:41:27.845976  196335 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 02:41:27.846060  196335 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-736081:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 02:41:32.608383  196335 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-736081:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (4.762283276s)
	I0110 02:41:32.608429  196335 kic.go:203] duration metric: took 4.762449007s to extract preloaded images to volume ...
	W0110 02:41:32.608560  196335 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 02:41:32.608662  196335 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 02:41:32.661418  196335 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-736081 --name old-k8s-version-736081 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-736081 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-736081 --network old-k8s-version-736081 --ip 192.168.76.2 --volume old-k8s-version-736081:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 02:41:32.935326  196335 cli_runner.go:164] Run: docker container inspect old-k8s-version-736081 --format={{.State.Running}}
	I0110 02:41:32.954769  196335 cli_runner.go:164] Run: docker container inspect old-k8s-version-736081 --format={{.State.Status}}
	I0110 02:41:32.981372  196335 cli_runner.go:164] Run: docker exec old-k8s-version-736081 stat /var/lib/dpkg/alternatives/iptables
	I0110 02:41:33.034832  196335 oci.go:144] the created container "old-k8s-version-736081" has a running status.
	I0110 02:41:33.034859  196335 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa...
	I0110 02:41:33.500364  196335 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 02:41:33.532385  196335 cli_runner.go:164] Run: docker container inspect old-k8s-version-736081 --format={{.State.Status}}
	I0110 02:41:33.554911  196335 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 02:41:33.554930  196335 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-736081 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 02:41:33.595652  196335 cli_runner.go:164] Run: docker container inspect old-k8s-version-736081 --format={{.State.Status}}
	I0110 02:41:33.612968  196335 machine.go:94] provisionDockerMachine start ...
	I0110 02:41:33.613048  196335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:41:33.629191  196335 main.go:144] libmachine: Using SSH client type: native
	I0110 02:41:33.629526  196335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I0110 02:41:33.629548  196335 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:41:33.630210  196335 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46180->127.0.0.1:33048: read: connection reset by peer
	I0110 02:41:36.779531  196335 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-736081
	
	I0110 02:41:36.779557  196335 ubuntu.go:182] provisioning hostname "old-k8s-version-736081"
	I0110 02:41:36.779629  196335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:41:36.797062  196335 main.go:144] libmachine: Using SSH client type: native
	I0110 02:41:36.797370  196335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I0110 02:41:36.797387  196335 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-736081 && echo "old-k8s-version-736081" | sudo tee /etc/hostname
	I0110 02:41:36.956904  196335 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-736081
	
	I0110 02:41:36.957054  196335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:41:36.985030  196335 main.go:144] libmachine: Using SSH client type: native
	I0110 02:41:36.985338  196335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I0110 02:41:36.985375  196335 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-736081' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-736081/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-736081' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:41:37.139919  196335 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:41:37.139947  196335 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2353/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2353/.minikube}
	I0110 02:41:37.139966  196335 ubuntu.go:190] setting up certificates
	I0110 02:41:37.139976  196335 provision.go:84] configureAuth start
	I0110 02:41:37.140038  196335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-736081
	I0110 02:41:37.157315  196335 provision.go:143] copyHostCerts
	I0110 02:41:37.157384  196335 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem, removing ...
	I0110 02:41:37.157399  196335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem
	I0110 02:41:37.157486  196335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem (1082 bytes)
	I0110 02:41:37.157586  196335 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem, removing ...
	I0110 02:41:37.157596  196335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem
	I0110 02:41:37.157623  196335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem (1123 bytes)
	I0110 02:41:37.157682  196335 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem, removing ...
	I0110 02:41:37.157690  196335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem
	I0110 02:41:37.157713  196335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem (1679 bytes)
	I0110 02:41:37.157763  196335 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-736081 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-736081]
	I0110 02:41:37.284836  196335 provision.go:177] copyRemoteCerts
	I0110 02:41:37.284902  196335 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:41:37.284949  196335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:41:37.302762  196335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa Username:docker}
	I0110 02:41:37.407518  196335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:41:37.424501  196335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0110 02:41:37.441834  196335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 02:41:37.458462  196335 provision.go:87] duration metric: took 318.472793ms to configureAuth
	I0110 02:41:37.458487  196335 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:41:37.458678  196335 config.go:182] Loaded profile config "old-k8s-version-736081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 02:41:37.458786  196335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:41:37.476328  196335 main.go:144] libmachine: Using SSH client type: native
	I0110 02:41:37.476637  196335 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33048 <nil> <nil>}
	I0110 02:41:37.476658  196335 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:41:37.797512  196335 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:41:37.797539  196335 machine.go:97] duration metric: took 4.184552409s to provisionDockerMachine
	I0110 02:41:37.797551  196335 client.go:176] duration metric: took 10.608063655s to LocalClient.Create
	I0110 02:41:37.797565  196335 start.go:167] duration metric: took 10.608131739s to libmachine.API.Create "old-k8s-version-736081"
	I0110 02:41:37.797572  196335 start.go:293] postStartSetup for "old-k8s-version-736081" (driver="docker")
	I0110 02:41:37.797582  196335 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:41:37.797646  196335 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:41:37.797701  196335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:41:37.815785  196335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa Username:docker}
	I0110 02:41:37.920577  196335 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:41:37.923709  196335 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:41:37.923776  196335 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:41:37.923809  196335 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/addons for local assets ...
	I0110 02:41:37.923862  196335 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/files for local assets ...
	I0110 02:41:37.923951  196335 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> 41682.pem in /etc/ssl/certs
	I0110 02:41:37.924075  196335 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:41:37.931546  196335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:41:37.949092  196335 start.go:296] duration metric: took 151.505321ms for postStartSetup
	I0110 02:41:37.949495  196335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-736081
	I0110 02:41:37.966783  196335 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/config.json ...
	I0110 02:41:37.967132  196335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:41:37.967205  196335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:41:37.984221  196335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa Username:docker}
	I0110 02:41:38.090362  196335 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:41:38.095741  196335 start.go:128] duration metric: took 10.912240517s to createHost
	I0110 02:41:38.095771  196335 start.go:83] releasing machines lock for "old-k8s-version-736081", held for 10.912365707s
	I0110 02:41:38.095865  196335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-736081
	I0110 02:41:38.116373  196335 ssh_runner.go:195] Run: cat /version.json
	I0110 02:41:38.116433  196335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:41:38.116529  196335 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:41:38.116588  196335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:41:38.141166  196335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa Username:docker}
	I0110 02:41:38.153466  196335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa Username:docker}
	I0110 02:41:38.345755  196335 ssh_runner.go:195] Run: systemctl --version
	I0110 02:41:38.352182  196335 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:41:38.390016  196335 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:41:38.394096  196335 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:41:38.394173  196335 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:41:38.419768  196335 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 02:41:38.419889  196335 start.go:496] detecting cgroup driver to use...
	I0110 02:41:38.419928  196335 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 02:41:38.420007  196335 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:41:38.440774  196335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:41:38.456596  196335 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:41:38.456715  196335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:41:38.475416  196335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:41:38.496457  196335 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:41:38.609277  196335 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:41:38.732644  196335 docker.go:234] disabling docker service ...
	I0110 02:41:38.732757  196335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:41:38.754698  196335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:41:38.767271  196335 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:41:38.903944  196335 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:41:39.027006  196335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:41:39.039697  196335 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:41:39.054427  196335 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0110 02:41:39.054546  196335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:41:39.063837  196335 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 02:41:39.063959  196335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:41:39.073310  196335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:41:39.083235  196335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:41:39.092404  196335 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:41:39.100292  196335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:41:39.108749  196335 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:41:39.121612  196335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:41:39.130263  196335 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:41:39.137587  196335 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:41:39.144793  196335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:41:39.279778  196335 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 02:41:39.453565  196335 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:41:39.453675  196335 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:41:39.457398  196335 start.go:574] Will wait 60s for crictl version
	I0110 02:41:39.457502  196335 ssh_runner.go:195] Run: which crictl
	I0110 02:41:39.460944  196335 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:41:39.486145  196335 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
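The sed edits above set the pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl in CRI-O's drop-in config before the restart that follows. A rough sketch of the values the drop-in should end up carrying (the surrounding file layout in the base image is an assumption, not captured in this log), plus one way to cross-check what CRI-O actually loaded:

    # sketch only: expected values in /etc/crio/crio.conf.d/02-crio.conf after the edits above
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0" ]
    sudo crio config | grep -E 'pause_image|cgroup_manager|conmon_cgroup'   # print the merged config CRI-O uses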
	I0110 02:41:39.486227  196335 ssh_runner.go:195] Run: crio --version
	I0110 02:41:39.517209  196335 ssh_runner.go:195] Run: crio --version
	I0110 02:41:39.548771  196335 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.35.0 ...
	I0110 02:41:39.550029  196335 cli_runner.go:164] Run: docker network inspect old-k8s-version-736081 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:41:39.566244  196335 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 02:41:39.569887  196335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:41:39.579681  196335 kubeadm.go:884] updating cluster {Name:old-k8s-version-736081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-736081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:41:39.579827  196335 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0110 02:41:39.579888  196335 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:41:39.622658  196335 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:41:39.622682  196335 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:41:39.622736  196335 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:41:39.648655  196335 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:41:39.648681  196335 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:41:39.648689  196335 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I0110 02:41:39.648773  196335 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-736081 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-736081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:41:39.648855  196335 ssh_runner.go:195] Run: crio config
	I0110 02:41:39.701907  196335 cni.go:84] Creating CNI manager for ""
	I0110 02:41:39.701932  196335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:41:39.701947  196335 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:41:39.701969  196335 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-736081 NodeName:old-k8s-version-736081 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:41:39.702119  196335 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-736081"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:41:39.702192  196335 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I0110 02:41:39.709815  196335 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:41:39.709887  196335 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:41:39.717223  196335 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0110 02:41:39.730554  196335 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:41:39.743377  196335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
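The kubeadm config rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new here and swapped into place before init. As an illustrative aside (not something this run executes), the same file can be exercised without touching the node:

    # illustrative: render the init steps for the generated config without applying them
    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run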
	I0110 02:41:39.756964  196335 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:41:39.760449  196335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:41:39.769877  196335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:41:39.884099  196335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:41:39.900541  196335 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081 for IP: 192.168.76.2
	I0110 02:41:39.900560  196335 certs.go:195] generating shared ca certs ...
	I0110 02:41:39.900577  196335 certs.go:227] acquiring lock for ca certs: {Name:mk60bb8578732e3e8efcf11c7d77fb48585828c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:41:39.900713  196335 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key
	I0110 02:41:39.900763  196335 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key
	I0110 02:41:39.900774  196335 certs.go:257] generating profile certs ...
	I0110 02:41:39.900832  196335 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.key
	I0110 02:41:39.900849  196335 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.crt with IP's: []
	I0110 02:41:40.049653  196335 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.crt ...
	I0110 02:41:40.049699  196335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.crt: {Name:mk0d451eb49de977fa5ab4256d7dea3c7e12abf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:41:40.049923  196335 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.key ...
	I0110 02:41:40.049939  196335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.key: {Name:mk6e7c3433e5d960ad479ac23c308a0db655f0b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:41:40.050051  196335 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/apiserver.key.ee08958c
	I0110 02:41:40.050068  196335 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/apiserver.crt.ee08958c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0110 02:41:40.183981  196335 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/apiserver.crt.ee08958c ...
	I0110 02:41:40.184013  196335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/apiserver.crt.ee08958c: {Name:mka7c45c843665c050c94a427292f3cfa2505351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:41:40.184202  196335 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/apiserver.key.ee08958c ...
	I0110 02:41:40.184226  196335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/apiserver.key.ee08958c: {Name:mk85cc241ab4007ff655a58f4e36d41a0216129f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:41:40.184312  196335 certs.go:382] copying /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/apiserver.crt.ee08958c -> /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/apiserver.crt
	I0110 02:41:40.184394  196335 certs.go:386] copying /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/apiserver.key.ee08958c -> /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/apiserver.key
	I0110 02:41:40.184451  196335 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/proxy-client.key
	I0110 02:41:40.184470  196335 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/proxy-client.crt with IP's: []
	I0110 02:41:40.391311  196335 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/proxy-client.crt ...
	I0110 02:41:40.391340  196335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/proxy-client.crt: {Name:mke95e14ca57e0293797888c41c1280d9c824bcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:41:40.391518  196335 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/proxy-client.key ...
	I0110 02:41:40.391532  196335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/proxy-client.key: {Name:mkc91ad71a01cedba2280cee50c75e368bd58555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:41:40.391718  196335 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem (1338 bytes)
	W0110 02:41:40.391763  196335 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168_empty.pem, impossibly tiny 0 bytes
	I0110 02:41:40.391778  196335 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:41:40.391828  196335 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:41:40.391856  196335 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:41:40.391891  196335 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem (1679 bytes)
	I0110 02:41:40.391947  196335 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:41:40.392804  196335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:41:40.413131  196335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:41:40.430635  196335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:41:40.447568  196335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 02:41:40.465312  196335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0110 02:41:40.483719  196335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:41:40.501010  196335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:41:40.518102  196335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 02:41:40.535102  196335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:41:40.552250  196335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I0110 02:41:40.569050  196335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I0110 02:41:40.586265  196335 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:41:40.598634  196335 ssh_runner.go:195] Run: openssl version
	I0110 02:41:40.604859  196335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:41:40.612063  196335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:41:40.619263  196335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:41:40.622805  196335 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:41:40.622895  196335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:41:40.665431  196335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:41:40.673089  196335 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 02:41:40.686090  196335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I0110 02:41:40.695286  196335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I0110 02:41:40.706583  196335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I0110 02:41:40.711564  196335 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:58 /usr/share/ca-certificates/4168.pem
	I0110 02:41:40.711699  196335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I0110 02:41:40.759712  196335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:41:40.768091  196335 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4168.pem /etc/ssl/certs/51391683.0
	I0110 02:41:40.786749  196335 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I0110 02:41:40.794741  196335 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I0110 02:41:40.803207  196335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I0110 02:41:40.807839  196335 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:58 /usr/share/ca-certificates/41682.pem
	I0110 02:41:40.807960  196335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I0110 02:41:40.851593  196335 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:41:40.859318  196335 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41682.pem /etc/ssl/certs/3ec20f2e.0
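The numeric symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the OpenSSL convention of exposing each CA under its subject hash in /etc/ssl/certs. The hash itself comes from the same command the test runs, for example:

    # prints the subject hash used as the symlink name (b5213941 for minikubeCA.pem, per the log above)
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem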
	I0110 02:41:40.866996  196335 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:41:40.870804  196335 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 02:41:40.870886  196335 kubeadm.go:401] StartCluster: {Name:old-k8s-version-736081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-736081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:41:40.870975  196335 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:41:40.871040  196335 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:41:40.899455  196335 cri.go:96] found id: ""
	I0110 02:41:40.899535  196335 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:41:40.907653  196335 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 02:41:40.915728  196335 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:41:40.915866  196335 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:41:40.923892  196335 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:41:40.923914  196335 kubeadm.go:158] found existing configuration files:
	
	I0110 02:41:40.924004  196335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:41:40.931627  196335 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:41:40.931702  196335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:41:40.939111  196335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:41:40.946778  196335 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:41:40.946855  196335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:41:40.954135  196335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:41:40.961984  196335 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:41:40.962104  196335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:41:40.969580  196335 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:41:40.977470  196335 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:41:40.977557  196335 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:41:40.985030  196335 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:41:41.030440  196335 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I0110 02:41:41.030725  196335 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:41:41.069326  196335 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:41:41.069442  196335 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:41:41.069505  196335 kubeadm.go:319] OS: Linux
	I0110 02:41:41.069569  196335 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:41:41.069666  196335 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:41:41.069747  196335 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:41:41.069825  196335 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:41:41.069902  196335 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:41:41.069989  196335 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:41:41.070073  196335 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:41:41.070164  196335 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:41:41.070236  196335 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:41:41.149529  196335 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:41:41.149711  196335 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:41:41.149839  196335 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0110 02:41:41.306276  196335 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 02:41:41.308734  196335 out.go:252]   - Generating certificates and keys ...
	I0110 02:41:41.308876  196335 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:41:41.308980  196335 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:41:42.080531  196335 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 02:41:42.926990  196335 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 02:41:44.329900  196335 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 02:41:45.332267  196335 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 02:41:46.192079  196335 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 02:41:46.192635  196335 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-736081] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 02:41:47.177752  196335 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 02:41:47.178101  196335 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-736081] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 02:41:47.548790  196335 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 02:41:48.131332  196335 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 02:41:49.004831  196335 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 02:41:49.005123  196335 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:41:50.158372  196335 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:41:50.408846  196335 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:41:50.679858  196335 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:41:51.748528  196335 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:41:51.749184  196335 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:41:51.752167  196335 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:41:51.753671  196335 out.go:252]   - Booting up control plane ...
	I0110 02:41:51.753761  196335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:41:51.753839  196335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:41:51.754937  196335 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:41:51.771287  196335 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:41:51.772424  196335 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:41:51.772735  196335 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:41:51.907958  196335 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0110 02:41:58.911640  196335 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.004134 seconds
	I0110 02:41:58.911762  196335 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0110 02:41:58.924668  196335 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0110 02:41:59.452162  196335 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0110 02:41:59.452589  196335 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-736081 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0110 02:41:59.964569  196335 kubeadm.go:319] [bootstrap-token] Using token: xbd3o4.u08srxq4bx1vfspa
	I0110 02:41:59.965879  196335 out.go:252]   - Configuring RBAC rules ...
	I0110 02:41:59.966016  196335 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0110 02:41:59.971447  196335 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0110 02:41:59.979388  196335 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0110 02:41:59.984271  196335 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0110 02:41:59.988353  196335 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0110 02:41:59.991909  196335 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0110 02:42:00.014737  196335 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0110 02:42:00.530132  196335 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0110 02:42:00.569616  196335 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0110 02:42:00.571094  196335 kubeadm.go:319] 
	I0110 02:42:00.571166  196335 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0110 02:42:00.571172  196335 kubeadm.go:319] 
	I0110 02:42:00.571249  196335 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0110 02:42:00.571260  196335 kubeadm.go:319] 
	I0110 02:42:00.571285  196335 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0110 02:42:00.571775  196335 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0110 02:42:00.571861  196335 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0110 02:42:00.571867  196335 kubeadm.go:319] 
	I0110 02:42:00.571921  196335 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0110 02:42:00.571925  196335 kubeadm.go:319] 
	I0110 02:42:00.571973  196335 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0110 02:42:00.571976  196335 kubeadm.go:319] 
	I0110 02:42:00.572028  196335 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0110 02:42:00.572104  196335 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0110 02:42:00.572172  196335 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0110 02:42:00.572176  196335 kubeadm.go:319] 
	I0110 02:42:00.572480  196335 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0110 02:42:00.572563  196335 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0110 02:42:00.572567  196335 kubeadm.go:319] 
	I0110 02:42:00.572868  196335 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token xbd3o4.u08srxq4bx1vfspa \
	I0110 02:42:00.572976  196335 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e531c0ad449fbe8263f118458e677222d5af5abf33ad8c77cbc1152855f23f5 \
	I0110 02:42:00.573179  196335 kubeadm.go:319] 	--control-plane 
	I0110 02:42:00.573188  196335 kubeadm.go:319] 
	I0110 02:42:00.573483  196335 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0110 02:42:00.573491  196335 kubeadm.go:319] 
	I0110 02:42:00.573770  196335 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xbd3o4.u08srxq4bx1vfspa \
	I0110 02:42:00.574099  196335 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e531c0ad449fbe8263f118458e677222d5af5abf33ad8c77cbc1152855f23f5 
	I0110 02:42:00.579323  196335 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:42:00.579453  196335 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
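The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed from the CA certificate with the standard kubeadm recipe, shown here purely for reference and assuming an RSA CA key (the kind minikube generates by default):

    # recompute the discovery hash from the control-plane's CA certificate
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex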
	I0110 02:42:00.579525  196335 cni.go:84] Creating CNI manager for ""
	I0110 02:42:00.579553  196335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:42:00.581145  196335 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0110 02:42:00.582410  196335 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0110 02:42:00.587877  196335 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I0110 02:42:00.587897  196335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0110 02:42:00.614109  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
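Applying the CNI manifest above should leave a kindnet DaemonSet in kube-system; the name "kindnet" is an assumption about the manifest's contents rather than something this log shows. A quick manual check could be:

    # assumed DaemonSet name; verifies the CNI pods rolled out after the apply above
    sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get daemonset kindnet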
	I0110 02:42:01.514018  196335 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0110 02:42:01.514108  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:01.514224  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-736081 minikube.k8s.io/updated_at=2026_01_10T02_42_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510 minikube.k8s.io/name=old-k8s-version-736081 minikube.k8s.io/primary=true
	I0110 02:42:01.608631  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:01.735084  196335 ops.go:34] apiserver oom_adj: -16
	I0110 02:42:02.108754  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:02.609051  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:03.108697  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:03.609605  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:04.108732  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:04.609089  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:05.108707  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:05.609655  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:06.108879  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:06.609658  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:07.109060  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:07.609668  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:08.109481  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:08.608658  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:09.108733  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:09.608711  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:10.108758  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:10.608992  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:11.109016  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:11.608810  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:12.109643  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:12.609604  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:13.109123  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:13.609060  196335 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:42:13.791194  196335 kubeadm.go:1114] duration metric: took 12.277155147s to wait for elevateKubeSystemPrivileges
	I0110 02:42:13.791226  196335 kubeadm.go:403] duration metric: took 32.920372094s to StartCluster
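The repeated "kubectl get sa default" runs above are the wait loop behind the elevateKubeSystemPrivileges timing on the previous lines: the start only proceeds once the ServiceAccount controller has created the "default" account, a sign the controller-manager is doing useful work. A one-shot equivalent of what each iteration checks (illustrative):

    # succeeds once the 'default' ServiceAccount exists in the default namespace
    sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n default get serviceaccount default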
	I0110 02:42:13.791243  196335 settings.go:142] acquiring lock: {Name:mkf19ab3097ad600bdb33c5b47b20062ddaabac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:42:13.791304  196335 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:42:13.791985  196335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:42:13.792189  196335 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:42:13.792299  196335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0110 02:42:13.792552  196335 config.go:182] Loaded profile config "old-k8s-version-736081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 02:42:13.792596  196335 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:42:13.792675  196335 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-736081"
	I0110 02:42:13.792692  196335 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-736081"
	I0110 02:42:13.792712  196335 host.go:66] Checking if "old-k8s-version-736081" exists ...
	I0110 02:42:13.793160  196335 cli_runner.go:164] Run: docker container inspect old-k8s-version-736081 --format={{.State.Status}}
	I0110 02:42:13.793658  196335 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-736081"
	I0110 02:42:13.793681  196335 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-736081"
	I0110 02:42:13.793949  196335 cli_runner.go:164] Run: docker container inspect old-k8s-version-736081 --format={{.State.Status}}
	I0110 02:42:13.794335  196335 out.go:179] * Verifying Kubernetes components...
	I0110 02:42:13.796704  196335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:42:13.828865  196335 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-736081"
	I0110 02:42:13.828901  196335 host.go:66] Checking if "old-k8s-version-736081" exists ...
	I0110 02:42:13.829318  196335 cli_runner.go:164] Run: docker container inspect old-k8s-version-736081 --format={{.State.Status}}
	I0110 02:42:13.830564  196335 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:42:13.832756  196335 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:42:13.832775  196335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:42:13.832830  196335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:42:13.868059  196335 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:42:13.868089  196335 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:42:13.868148  196335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:42:13.873223  196335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa Username:docker}
	I0110 02:42:13.908980  196335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa Username:docker}
	I0110 02:42:14.196717  196335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:42:14.237007  196335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:42:14.237287  196335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0110 02:42:14.298922  196335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:42:15.385631  196335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.188880864s)
	I0110 02:42:15.385695  196335 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.148365679s)
	I0110 02:42:15.385720  196335 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0110 02:42:15.386722  196335 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.149547159s)
	I0110 02:42:15.387318  196335 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-736081" to be "Ready" ...
	I0110 02:42:15.387544  196335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.088597915s)
	I0110 02:42:15.472122  196335 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0110 02:42:15.475006  196335 addons.go:530] duration metric: took 1.682403676s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0110 02:42:15.891428  196335 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-736081" context rescaled to 1 replicas
	W0110 02:42:17.390524  196335 node_ready.go:57] node "old-k8s-version-736081" has "Ready":"False" status (will retry)
	W0110 02:42:19.391056  196335 node_ready.go:57] node "old-k8s-version-736081" has "Ready":"False" status (will retry)
	W0110 02:42:21.890267  196335 node_ready.go:57] node "old-k8s-version-736081" has "Ready":"False" status (will retry)
	W0110 02:42:23.890487  196335 node_ready.go:57] node "old-k8s-version-736081" has "Ready":"False" status (will retry)
	W0110 02:42:25.891465  196335 node_ready.go:57] node "old-k8s-version-736081" has "Ready":"False" status (will retry)
	I0110 02:42:27.890207  196335 node_ready.go:49] node "old-k8s-version-736081" is "Ready"
	I0110 02:42:27.890233  196335 node_ready.go:38] duration metric: took 12.50290042s for node "old-k8s-version-736081" to be "Ready" ...
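The node_ready poll that just completed waits on the same Ready condition one could query directly; a manual equivalent using the kubeconfig updated above:

    # blocks until the kubelet reports the node Ready, mirroring the 12.5s wait logged above
    kubectl --kubeconfig /home/jenkins/minikube-integration/22414-2353/kubeconfig wait --for=condition=Ready node/old-k8s-version-736081 --timeout=6m0s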
	I0110 02:42:27.890246  196335 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:42:27.890306  196335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:42:27.904235  196335 api_server.go:72] duration metric: took 14.112012118s to wait for apiserver process to appear ...
	I0110 02:42:27.904257  196335 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:42:27.904276  196335 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:42:27.913807  196335 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 02:42:27.916624  196335 api_server.go:141] control plane version: v1.28.0
	I0110 02:42:27.916705  196335 api_server.go:131] duration metric: took 12.431898ms to wait for apiserver health ...
	I0110 02:42:27.916731  196335 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:42:27.928472  196335 system_pods.go:59] 8 kube-system pods found
	I0110 02:42:27.928554  196335 system_pods.go:61] "coredns-5dd5756b68-5nbj4" [eef91741-4e4f-4500-8bfb-fc6218330aa6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:42:27.928575  196335 system_pods.go:61] "etcd-old-k8s-version-736081" [98f5a6d4-ccda-4c98-8e24-94a1c2e4ddd0] Running
	I0110 02:42:27.928612  196335 system_pods.go:61] "kindnet-gx95x" [60c01285-ba95-4583-a0c0-b55ef4afab1f] Running
	I0110 02:42:27.928635  196335 system_pods.go:61] "kube-apiserver-old-k8s-version-736081" [ff94f7cb-219a-422e-af5a-93d1e5995c4f] Running
	I0110 02:42:27.928656  196335 system_pods.go:61] "kube-controller-manager-old-k8s-version-736081" [21c33917-1238-4581-bb07-ec7199814749] Running
	I0110 02:42:27.928677  196335 system_pods.go:61] "kube-proxy-kngxj" [121d4fcc-5f3d-4c31-9cec-a560afde1be1] Running
	I0110 02:42:27.928710  196335 system_pods.go:61] "kube-scheduler-old-k8s-version-736081" [f644c304-2337-4ddf-a914-02772d8ad2ed] Running
	I0110 02:42:27.928733  196335 system_pods.go:61] "storage-provisioner" [cc657c06-3c7c-4be5-849e-1b627d06ed63] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:42:27.928752  196335 system_pods.go:74] duration metric: took 11.987028ms to wait for pod list to return data ...
	I0110 02:42:27.928774  196335 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:42:27.936634  196335 default_sa.go:45] found service account: "default"
	I0110 02:42:27.936699  196335 default_sa.go:55] duration metric: took 7.905517ms for default service account to be created ...
	I0110 02:42:27.936724  196335 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:42:27.967071  196335 system_pods.go:86] 8 kube-system pods found
	I0110 02:42:27.967159  196335 system_pods.go:89] "coredns-5dd5756b68-5nbj4" [eef91741-4e4f-4500-8bfb-fc6218330aa6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:42:27.967182  196335 system_pods.go:89] "etcd-old-k8s-version-736081" [98f5a6d4-ccda-4c98-8e24-94a1c2e4ddd0] Running
	I0110 02:42:27.967219  196335 system_pods.go:89] "kindnet-gx95x" [60c01285-ba95-4583-a0c0-b55ef4afab1f] Running
	I0110 02:42:27.967246  196335 system_pods.go:89] "kube-apiserver-old-k8s-version-736081" [ff94f7cb-219a-422e-af5a-93d1e5995c4f] Running
	I0110 02:42:27.967269  196335 system_pods.go:89] "kube-controller-manager-old-k8s-version-736081" [21c33917-1238-4581-bb07-ec7199814749] Running
	I0110 02:42:27.967293  196335 system_pods.go:89] "kube-proxy-kngxj" [121d4fcc-5f3d-4c31-9cec-a560afde1be1] Running
	I0110 02:42:27.967326  196335 system_pods.go:89] "kube-scheduler-old-k8s-version-736081" [f644c304-2337-4ddf-a914-02772d8ad2ed] Running
	I0110 02:42:27.967352  196335 system_pods.go:89] "storage-provisioner" [cc657c06-3c7c-4be5-849e-1b627d06ed63] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:42:27.967376  196335 system_pods.go:126] duration metric: took 30.63288ms to wait for k8s-apps to be running ...
	I0110 02:42:27.967398  196335 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:42:27.967477  196335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:42:27.984278  196335 system_svc.go:56] duration metric: took 16.871059ms WaitForService to wait for kubelet
	I0110 02:42:27.984348  196335 kubeadm.go:587] duration metric: took 14.192127754s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:42:27.984383  196335 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:42:27.992332  196335 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 02:42:27.992405  196335 node_conditions.go:123] node cpu capacity is 2
	I0110 02:42:27.992435  196335 node_conditions.go:105] duration metric: took 8.031504ms to run NodePressure ...
	I0110 02:42:27.992463  196335 start.go:242] waiting for startup goroutines ...
	I0110 02:42:27.992503  196335 start.go:247] waiting for cluster config update ...
	I0110 02:42:27.992529  196335 start.go:256] writing updated cluster config ...
	I0110 02:42:27.992826  196335 ssh_runner.go:195] Run: rm -f paused
	I0110 02:42:27.997825  196335 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:42:28.007116  196335 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-5nbj4" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:42:29.014130  196335 pod_ready.go:94] pod "coredns-5dd5756b68-5nbj4" is "Ready"
	I0110 02:42:29.014161  196335 pod_ready.go:86] duration metric: took 1.006965575s for pod "coredns-5dd5756b68-5nbj4" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:42:29.017376  196335 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-736081" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:42:29.022527  196335 pod_ready.go:94] pod "etcd-old-k8s-version-736081" is "Ready"
	I0110 02:42:29.022553  196335 pod_ready.go:86] duration metric: took 5.152389ms for pod "etcd-old-k8s-version-736081" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:42:29.025628  196335 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-736081" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:42:29.030359  196335 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-736081" is "Ready"
	I0110 02:42:29.030421  196335 pod_ready.go:86] duration metric: took 4.76835ms for pod "kube-apiserver-old-k8s-version-736081" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:42:29.033331  196335 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-736081" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:42:29.211082  196335 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-736081" is "Ready"
	I0110 02:42:29.211108  196335 pod_ready.go:86] duration metric: took 177.755073ms for pod "kube-controller-manager-old-k8s-version-736081" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:42:29.411884  196335 pod_ready.go:83] waiting for pod "kube-proxy-kngxj" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:42:29.811056  196335 pod_ready.go:94] pod "kube-proxy-kngxj" is "Ready"
	I0110 02:42:29.811081  196335 pod_ready.go:86] duration metric: took 399.17088ms for pod "kube-proxy-kngxj" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:42:30.014183  196335 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-736081" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:42:30.411624  196335 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-736081" is "Ready"
	I0110 02:42:30.411651  196335 pod_ready.go:86] duration metric: took 397.44458ms for pod "kube-scheduler-old-k8s-version-736081" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:42:30.411665  196335 pod_ready.go:40] duration metric: took 2.413771032s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
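[editor's note] The pod_ready lines above poll each control-plane pod until its Ready condition is true (or the pod disappears), with an upper bound of 4 minutes. Below is a minimal client-go sketch of that kind of readiness poll, for readers who want to reproduce it outside the test harness; it is not minikube's pod_ready.go implementation, and the kubeconfig path, pod name, and poll interval are assumptions.

// Hedged sketch: poll a pod until its Ready condition is True, roughly what
// the pod_ready.go log lines above describe. Not minikube's actual code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location (~/.kube/config); adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Wait up to 4 minutes, mirroring the "extra waiting up to 4m0s" above.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-5nbj4", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			return isPodReady(pod), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}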
	I0110 02:42:30.480178  196335 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I0110 02:42:30.483100  196335 out.go:203] 
	W0110 02:42:30.485879  196335 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I0110 02:42:30.488758  196335 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:42:30.494190  196335 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-736081" cluster and "default" namespace by default
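[editor's note] The version-skew warning above is plain arithmetic on the two minor versions: kubectl 1.33.2 against cluster 1.28.0 gives 33 - 28 = 5. A small, self-contained sketch of that computation follows; it is illustrative only (hypothetical helper names, not the start.go check itself).

// Hedged sketch of the "minor skew" arithmetic behind the warning above.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component of a "major.minor.patch" version string.
func minor(v string) (int, error) {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("unexpected version %q", v)
	}
	return strconv.Atoi(parts[1])
}

// minorSkew returns the absolute difference between the minor versions.
func minorSkew(client, server string) (int, error) {
	cm, err := minor(client)
	if err != nil {
		return 0, err
	}
	sm, err := minor(server)
	if err != nil {
		return 0, err
	}
	if cm > sm {
		return cm - sm, nil
	}
	return sm - cm, nil
}

func main() {
	skew, err := minorSkew("1.33.2", "1.28.0")
	if err != nil {
		panic(err)
	}
	fmt.Printf("minor skew: %d\n", skew) // prints: minor skew: 5
}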
	
	
	==> CRI-O <==
	Jan 10 02:42:27 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:27.862956871Z" level=info msg="Created container 12c707c04c52b0636cf74bf54123a07ccaab7f7db944a7d805877e1a661cd602: kube-system/coredns-5dd5756b68-5nbj4/coredns" id=53b7824e-d726-4e35-8722-5ffaee700c91 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:42:27 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:27.863957598Z" level=info msg="Starting container: 12c707c04c52b0636cf74bf54123a07ccaab7f7db944a7d805877e1a661cd602" id=c289ad94-6986-4523-afa4-61903ddbe70e name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:42:27 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:27.865670855Z" level=info msg="Started container" PID=1976 containerID=12c707c04c52b0636cf74bf54123a07ccaab7f7db944a7d805877e1a661cd602 description=kube-system/coredns-5dd5756b68-5nbj4/coredns id=c289ad94-6986-4523-afa4-61903ddbe70e name=/runtime.v1.RuntimeService/StartContainer sandboxID=ddbd571b15d2a975f7f8db40aaa2279b151dacce3fefad1f6661f34c475b04b1
	Jan 10 02:42:31 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:31.00010385Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3cc6e931-3062-4c55-9c48-67d77007fe29 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:42:31 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:31.000186112Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:42:31 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:31.010057688Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b7ba42dbe3393c38b859998b2f3e0c6dbfee69b419a50400759b42c3aa6029bc UID:0e3dec6c-049f-485c-ac7f-6e44f1f434bb NetNS:/var/run/netns/56d3c8b7-bffd-4b0b-9d74-27ecbc19ac2f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004c99f8}] Aliases:map[]}"
	Jan 10 02:42:31 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:31.01011349Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 10 02:42:31 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:31.025302537Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:b7ba42dbe3393c38b859998b2f3e0c6dbfee69b419a50400759b42c3aa6029bc UID:0e3dec6c-049f-485c-ac7f-6e44f1f434bb NetNS:/var/run/netns/56d3c8b7-bffd-4b0b-9d74-27ecbc19ac2f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004c99f8}] Aliases:map[]}"
	Jan 10 02:42:31 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:31.025448855Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 10 02:42:31 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:31.030833459Z" level=info msg="Ran pod sandbox b7ba42dbe3393c38b859998b2f3e0c6dbfee69b419a50400759b42c3aa6029bc with infra container: default/busybox/POD" id=3cc6e931-3062-4c55-9c48-67d77007fe29 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:42:31 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:31.032301774Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2009cb5d-111c-442a-b77b-e6c52755a4fe name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:42:31 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:31.032435933Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=2009cb5d-111c-442a-b77b-e6c52755a4fe name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:42:31 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:31.03251529Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=2009cb5d-111c-442a-b77b-e6c52755a4fe name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:42:31 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:31.03333231Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0da49951-eb07-46fa-b8a0-7efd4b3827a3 name=/runtime.v1.ImageService/PullImage
	Jan 10 02:42:31 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:31.033732331Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 10 02:42:33 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:33.12355247Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=0da49951-eb07-46fa-b8a0-7efd4b3827a3 name=/runtime.v1.ImageService/PullImage
	Jan 10 02:42:33 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:33.124397599Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f8374db2-d2ec-4eef-bc49-73856fa6b3ed name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:42:33 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:33.125691175Z" level=info msg="Creating container: default/busybox/busybox" id=f008b380-44c3-4e53-adc6-9db95d2bb907 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:42:33 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:33.125802138Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:42:33 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:33.13063991Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:42:33 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:33.13110969Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:42:33 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:33.145058149Z" level=info msg="Created container 7c2c1b20b64bb8fc82b73ba7d8b7f21717dcf28177369b20ae429a638d504a85: default/busybox/busybox" id=f008b380-44c3-4e53-adc6-9db95d2bb907 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:42:33 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:33.145952852Z" level=info msg="Starting container: 7c2c1b20b64bb8fc82b73ba7d8b7f21717dcf28177369b20ae429a638d504a85" id=db6db6a7-3907-4c08-b07f-4145fb9c7a88 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:42:33 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:33.149481811Z" level=info msg="Started container" PID=2037 containerID=7c2c1b20b64bb8fc82b73ba7d8b7f21717dcf28177369b20ae429a638d504a85 description=default/busybox/busybox id=db6db6a7-3907-4c08-b07f-4145fb9c7a88 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b7ba42dbe3393c38b859998b2f3e0c6dbfee69b419a50400759b42c3aa6029bc
	Jan 10 02:42:39 old-k8s-version-736081 crio[839]: time="2026-01-10T02:42:39.875179918Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
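[editor's note] Everything in the CRI-O log above happens through CRI calls (RunPodSandbox, PullImage, CreateContainer, StartContainer). The same runtime state can be inspected ad hoc with crictl, which is how the "container status" table below is typically gathered. The sketch below shells out to crictl from Go; it assumes crictl is on PATH, that the process can use sudo non-interactively, and that the runtime labels containers with io.kubernetes.pod.namespace.

// Hedged sketch: list kube-system container IDs through crictl.
// Assumes root access to the CRI-O socket; not part of the test suite.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println(id)
	}
}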
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	7c2c1b20b64bb       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   b7ba42dbe3393       busybox                                          default
	12c707c04c52b       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   ddbd571b15d2a       coredns-5dd5756b68-5nbj4                         kube-system
	3bd3367886d69       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   2da3ade2ffeb5       storage-provisioner                              kube-system
	91f3c5a564635       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    24 seconds ago      Running             kindnet-cni               0                   d103dfeb66653       kindnet-gx95x                                    kube-system
	fed1866968a13       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      27 seconds ago      Running             kube-proxy                0                   d5610ff0d6fbf       kube-proxy-kngxj                                 kube-system
	9bcf0d4134294       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      48 seconds ago      Running             kube-controller-manager   0                   1de5656baddcd       kube-controller-manager-old-k8s-version-736081   kube-system
	9b036541f6d7c       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      48 seconds ago      Running             kube-apiserver            0                   bb2407922ae1d       kube-apiserver-old-k8s-version-736081            kube-system
	88377afcfc38d       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      48 seconds ago      Running             etcd                      0                   36b883624c1d5       etcd-old-k8s-version-736081                      kube-system
	1b6a9930b1b7f       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      48 seconds ago      Running             kube-scheduler            0                   85577cff62891       kube-scheduler-old-k8s-version-736081            kube-system
	
	
	==> coredns [12c707c04c52b0636cf74bf54123a07ccaab7f7db944a7d805877e1a661cd602] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42289 - 10576 "HINFO IN 7771200629659690086.8489758143828737737. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011809531s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-736081
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-736081
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=old-k8s-version-736081
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_42_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:41:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-736081
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:42:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:42:31 +0000   Sat, 10 Jan 2026 02:41:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:42:31 +0000   Sat, 10 Jan 2026 02:41:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:42:31 +0000   Sat, 10 Jan 2026 02:41:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:42:31 +0000   Sat, 10 Jan 2026 02:42:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-736081
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                35697ab1-6362-43a3-ac4e-c38faa5c6e6d
	  Boot ID:                    41f52236-76b9-4dc8-bfe3-121fbdeb9659
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-5nbj4                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-old-k8s-version-736081                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         41s
	  kube-system                 kindnet-gx95x                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-736081             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-736081    200m (10%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-kngxj                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-736081             100m (5%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 49s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node old-k8s-version-736081 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node old-k8s-version-736081 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x8 over 49s)  kubelet          Node old-k8s-version-736081 status is now: NodeHasSufficientPID
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-736081 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-736081 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-736081 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node old-k8s-version-736081 event: Registered Node old-k8s-version-736081 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-736081 status is now: NodeReady
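[editor's note] The percentages in the "Allocated resources" table above are just requests and limits divided by the node's allocatable capacity: 850m CPU of 2 full cores is 42%, and 220Mi of 8022304Ki memory rounds down to 2%. The sketch below reproduces that arithmetic with the apimachinery Quantity type; it is illustrative only, not the kubectl describe implementation.

// Hedged sketch: request/allocatable percentages as shown in the table above.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// percent returns the integer percentage of request over allocatable,
// computed in milli-units so CPU values like "850m" keep their precision.
func percent(request, allocatable resource.Quantity) int64 {
	return request.MilliValue() * 100 / allocatable.MilliValue()
}

func main() {
	cpuReq := resource.MustParse("850m")
	cpuAlloc := resource.MustParse("2")
	memReq := resource.MustParse("220Mi")
	memAlloc := resource.MustParse("8022304Ki")

	fmt.Printf("cpu:    %d%%\n", percent(cpuReq, cpuAlloc)) // 42%
	fmt.Printf("memory: %d%%\n", percent(memReq, memAlloc)) // 2%
}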
	
	
	==> dmesg <==
	[  +3.770589] overlayfs: idmapped layers are currently not supported
	[ +39.223525] overlayfs: idmapped layers are currently not supported
	[Jan10 02:09] overlayfs: idmapped layers are currently not supported
	[Jan10 02:10] overlayfs: idmapped layers are currently not supported
	[Jan10 02:14] overlayfs: idmapped layers are currently not supported
	[ +27.765975] overlayfs: idmapped layers are currently not supported
	[Jan10 02:15] overlayfs: idmapped layers are currently not supported
	[Jan10 02:16] overlayfs: idmapped layers are currently not supported
	[Jan10 02:17] overlayfs: idmapped layers are currently not supported
	[Jan10 02:18] overlayfs: idmapped layers are currently not supported
	[ +23.212409] overlayfs: idmapped layers are currently not supported
	[  +9.866169] overlayfs: idmapped layers are currently not supported
	[Jan10 02:19] overlayfs: idmapped layers are currently not supported
	[Jan10 02:20] overlayfs: idmapped layers are currently not supported
	[ +35.960240] overlayfs: idmapped layers are currently not supported
	[Jan10 02:21] overlayfs: idmapped layers are currently not supported
	[Jan10 02:23] overlayfs: idmapped layers are currently not supported
	[Jan10 02:24] overlayfs: idmapped layers are currently not supported
	[Jan10 02:25] overlayfs: idmapped layers are currently not supported
	[Jan10 02:30] overlayfs: idmapped layers are currently not supported
	[Jan10 02:31] overlayfs: idmapped layers are currently not supported
	[Jan10 02:35] overlayfs: idmapped layers are currently not supported
	[Jan10 02:37] overlayfs: idmapped layers are currently not supported
	[Jan10 02:41] overlayfs: idmapped layers are currently not supported
	[ +37.534021] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [88377afcfc38df94d00f2c95a1dec9afa91ed38eaf85f5c3c20e02d825b47e81] <==
	{"level":"info","ts":"2026-01-10T02:41:53.371123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2026-01-10T02:41:53.371459Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2026-01-10T02:41:53.376Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2026-01-10T02:41:53.376196Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T02:41:53.376228Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T02:41:53.376312Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:41:53.376327Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:41:53.755829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-10T02:41:53.755946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-10T02:41:53.755999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2026-01-10T02:41:53.75604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:41:53.756075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T02:41:53.756112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2026-01-10T02:41:53.756145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T02:41:53.75995Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-736081 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:41:53.760035Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:41:53.761133Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T02:41:53.760097Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T02:41:53.762681Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:41:53.760115Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:41:53.763985Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T02:41:53.76787Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:41:53.768275Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T02:41:53.76841Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T02:41:53.768462Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 02:42:41 up  1:25,  0 user,  load average: 2.77, 1.76, 1.79
	Linux old-k8s-version-736081 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [91f3c5a564635839b96318c0bd33cb867e48c3f20807b957b1da6b172f37f774] <==
	I0110 02:42:17.027972       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:42:17.028314       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 02:42:17.028441       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:42:17.028458       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:42:17.028471       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:42:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:42:17.230637       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:42:17.231714       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:42:17.231738       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:42:17.231880       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 02:42:17.432297       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:42:17.523866       1 metrics.go:72] Registering metrics
	I0110 02:42:17.524013       1 controller.go:711] "Syncing nftables rules"
	I0110 02:42:27.236819       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:42:27.236875       1 main.go:301] handling current node
	I0110 02:42:37.231883       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:42:37.231917       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9b036541f6d7c170b758af12389da67f54ed3225bef630574a4ea575ecab6197] <==
	I0110 02:41:57.337976       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0110 02:41:57.338083       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0110 02:41:57.340858       1 aggregator.go:166] initial CRD sync complete...
	I0110 02:41:57.340963       1 autoregister_controller.go:141] Starting autoregister controller
	I0110 02:41:57.340994       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 02:41:57.341029       1 cache.go:39] Caches are synced for autoregister controller
	I0110 02:41:57.369056       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:41:57.376305       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0110 02:41:57.385211       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0110 02:41:57.387831       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0110 02:41:57.989298       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0110 02:41:57.993848       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0110 02:41:57.993880       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0110 02:41:58.480320       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:41:58.521528       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:41:58.609234       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0110 02:41:58.619231       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0110 02:41:58.620828       1 controller.go:624] quota admission added evaluator for: endpoints
	I0110 02:41:58.627833       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:41:59.253950       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0110 02:42:00.516996       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0110 02:42:00.528873       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0110 02:42:00.547275       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0110 02:42:13.223320       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0110 02:42:13.270845       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [9bcf0d413429480fe6c17e7c1f29e01c6796e48c6169d8a8f40f0c9315f5cea6] <==
	I0110 02:42:12.876957       1 shared_informer.go:318] Caches are synced for stateful set
	I0110 02:42:12.908490       1 shared_informer.go:318] Caches are synced for disruption
	I0110 02:42:13.241197       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-kngxj"
	I0110 02:42:13.244770       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-gx95x"
	I0110 02:42:13.276852       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0110 02:42:13.307920       1 shared_informer.go:318] Caches are synced for garbage collector
	I0110 02:42:13.307950       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0110 02:42:13.352114       1 shared_informer.go:318] Caches are synced for garbage collector
	I0110 02:42:13.775514       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-rtfkk"
	I0110 02:42:13.781636       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-5nbj4"
	I0110 02:42:13.815249       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="538.128924ms"
	I0110 02:42:13.957753       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="142.458779ms"
	I0110 02:42:13.957834       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="39.835µs"
	I0110 02:42:13.958167       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.466µs"
	I0110 02:42:15.482142       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0110 02:42:15.536271       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-rtfkk"
	I0110 02:42:15.546900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.092677ms"
	I0110 02:42:15.567638       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.69349ms"
	I0110 02:42:15.567738       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.501µs"
	I0110 02:42:27.466202       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.187µs"
	I0110 02:42:27.506956       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="54.013µs"
	I0110 02:42:27.711908       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0110 02:42:27.952837       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.356µs"
	I0110 02:42:28.962443       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.184533ms"
	I0110 02:42:28.962670       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.698µs"
	
	
	==> kube-proxy [fed1866968a1382fc09f890602446270aaa58934a3f9bceecd6fd9c4c3b0542c] <==
	I0110 02:42:13.784106       1 server_others.go:69] "Using iptables proxy"
	I0110 02:42:14.036655       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0110 02:42:14.103582       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:42:14.105865       1 server_others.go:152] "Using iptables Proxier"
	I0110 02:42:14.105902       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0110 02:42:14.105910       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0110 02:42:14.105944       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0110 02:42:14.106165       1 server.go:846] "Version info" version="v1.28.0"
	I0110 02:42:14.106176       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:42:14.107103       1 config.go:188] "Starting service config controller"
	I0110 02:42:14.107121       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0110 02:42:14.107137       1 config.go:97] "Starting endpoint slice config controller"
	I0110 02:42:14.107140       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0110 02:42:14.107611       1 config.go:315] "Starting node config controller"
	I0110 02:42:14.107619       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0110 02:42:14.207186       1 shared_informer.go:318] Caches are synced for service config
	I0110 02:42:14.207273       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0110 02:42:14.208046       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [1b6a9930b1b7f3afe13fe15fc9a52062ae95e89f0500909afd4aac034c4ea1fa] <==
	W0110 02:41:57.324914       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0110 02:41:57.324952       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0110 02:41:57.324960       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0110 02:41:57.325069       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0110 02:41:57.325091       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0110 02:41:57.325151       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0110 02:41:57.325053       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0110 02:41:57.325242       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0110 02:41:57.325263       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0110 02:41:57.325239       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0110 02:41:57.325204       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0110 02:41:57.325287       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0110 02:41:57.325134       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0110 02:41:57.325311       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0110 02:41:57.325021       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0110 02:41:57.325325       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0110 02:41:57.329280       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0110 02:41:57.329316       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0110 02:41:58.186273       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0110 02:41:58.186406       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0110 02:41:58.260327       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0110 02:41:58.260423       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0110 02:41:58.260522       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0110 02:41:58.260562       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0110 02:41:58.814956       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 10 02:42:13 old-k8s-version-736081 kubelet[1405]: I0110 02:42:13.267449    1405 topology_manager.go:215] "Topology Admit Handler" podUID="60c01285-ba95-4583-a0c0-b55ef4afab1f" podNamespace="kube-system" podName="kindnet-gx95x"
	Jan 10 02:42:13 old-k8s-version-736081 kubelet[1405]: I0110 02:42:13.440575    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60c01285-ba95-4583-a0c0-b55ef4afab1f-xtables-lock\") pod \"kindnet-gx95x\" (UID: \"60c01285-ba95-4583-a0c0-b55ef4afab1f\") " pod="kube-system/kindnet-gx95x"
	Jan 10 02:42:13 old-k8s-version-736081 kubelet[1405]: I0110 02:42:13.440632    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60c01285-ba95-4583-a0c0-b55ef4afab1f-lib-modules\") pod \"kindnet-gx95x\" (UID: \"60c01285-ba95-4583-a0c0-b55ef4afab1f\") " pod="kube-system/kindnet-gx95x"
	Jan 10 02:42:13 old-k8s-version-736081 kubelet[1405]: I0110 02:42:13.440674    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/121d4fcc-5f3d-4c31-9cec-a560afde1be1-kube-proxy\") pod \"kube-proxy-kngxj\" (UID: \"121d4fcc-5f3d-4c31-9cec-a560afde1be1\") " pod="kube-system/kube-proxy-kngxj"
	Jan 10 02:42:13 old-k8s-version-736081 kubelet[1405]: I0110 02:42:13.440698    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/121d4fcc-5f3d-4c31-9cec-a560afde1be1-lib-modules\") pod \"kube-proxy-kngxj\" (UID: \"121d4fcc-5f3d-4c31-9cec-a560afde1be1\") " pod="kube-system/kube-proxy-kngxj"
	Jan 10 02:42:13 old-k8s-version-736081 kubelet[1405]: I0110 02:42:13.440735    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz7qc\" (UniqueName: \"kubernetes.io/projected/121d4fcc-5f3d-4c31-9cec-a560afde1be1-kube-api-access-kz7qc\") pod \"kube-proxy-kngxj\" (UID: \"121d4fcc-5f3d-4c31-9cec-a560afde1be1\") " pod="kube-system/kube-proxy-kngxj"
	Jan 10 02:42:13 old-k8s-version-736081 kubelet[1405]: I0110 02:42:13.440774    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/60c01285-ba95-4583-a0c0-b55ef4afab1f-cni-cfg\") pod \"kindnet-gx95x\" (UID: \"60c01285-ba95-4583-a0c0-b55ef4afab1f\") " pod="kube-system/kindnet-gx95x"
	Jan 10 02:42:13 old-k8s-version-736081 kubelet[1405]: I0110 02:42:13.440815    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn962\" (UniqueName: \"kubernetes.io/projected/60c01285-ba95-4583-a0c0-b55ef4afab1f-kube-api-access-dn962\") pod \"kindnet-gx95x\" (UID: \"60c01285-ba95-4583-a0c0-b55ef4afab1f\") " pod="kube-system/kindnet-gx95x"
	Jan 10 02:42:13 old-k8s-version-736081 kubelet[1405]: I0110 02:42:13.440840    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/121d4fcc-5f3d-4c31-9cec-a560afde1be1-xtables-lock\") pod \"kube-proxy-kngxj\" (UID: \"121d4fcc-5f3d-4c31-9cec-a560afde1be1\") " pod="kube-system/kube-proxy-kngxj"
	Jan 10 02:42:13 old-k8s-version-736081 kubelet[1405]: W0110 02:42:13.588611    1405 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd/crio-d103dfeb66653cb16652cde697c88145fcac169a930c2f2d45482a6f918daa86 WatchSource:0}: Error finding container d103dfeb66653cb16652cde697c88145fcac169a930c2f2d45482a6f918daa86: Status 404 returned error can't find the container with id d103dfeb66653cb16652cde697c88145fcac169a930c2f2d45482a6f918daa86
	Jan 10 02:42:17 old-k8s-version-736081 kubelet[1405]: I0110 02:42:17.913066    1405 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-kngxj" podStartSLOduration=4.9130220829999995 podCreationTimestamp="2026-01-10 02:42:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:42:13.939820552 +0000 UTC m=+13.453127258" watchObservedRunningTime="2026-01-10 02:42:17.913022083 +0000 UTC m=+17.426328781"
	Jan 10 02:42:27 old-k8s-version-736081 kubelet[1405]: I0110 02:42:27.423493    1405 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 10 02:42:27 old-k8s-version-736081 kubelet[1405]: I0110 02:42:27.462479    1405 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-gx95x" podStartSLOduration=11.157556252 podCreationTimestamp="2026-01-10 02:42:13 +0000 UTC" firstStartedPulling="2026-01-10 02:42:13.596018389 +0000 UTC m=+13.109325079" lastFinishedPulling="2026-01-10 02:42:16.900863301 +0000 UTC m=+16.414169991" observedRunningTime="2026-01-10 02:42:17.914991693 +0000 UTC m=+17.428298399" watchObservedRunningTime="2026-01-10 02:42:27.462401164 +0000 UTC m=+26.975707863"
	Jan 10 02:42:27 old-k8s-version-736081 kubelet[1405]: I0110 02:42:27.463035    1405 topology_manager.go:215] "Topology Admit Handler" podUID="cc657c06-3c7c-4be5-849e-1b627d06ed63" podNamespace="kube-system" podName="storage-provisioner"
	Jan 10 02:42:27 old-k8s-version-736081 kubelet[1405]: I0110 02:42:27.464653    1405 topology_manager.go:215] "Topology Admit Handler" podUID="eef91741-4e4f-4500-8bfb-fc6218330aa6" podNamespace="kube-system" podName="coredns-5dd5756b68-5nbj4"
	Jan 10 02:42:27 old-k8s-version-736081 kubelet[1405]: I0110 02:42:27.572190    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57djb\" (UniqueName: \"kubernetes.io/projected/eef91741-4e4f-4500-8bfb-fc6218330aa6-kube-api-access-57djb\") pod \"coredns-5dd5756b68-5nbj4\" (UID: \"eef91741-4e4f-4500-8bfb-fc6218330aa6\") " pod="kube-system/coredns-5dd5756b68-5nbj4"
	Jan 10 02:42:27 old-k8s-version-736081 kubelet[1405]: I0110 02:42:27.572254    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cc657c06-3c7c-4be5-849e-1b627d06ed63-tmp\") pod \"storage-provisioner\" (UID: \"cc657c06-3c7c-4be5-849e-1b627d06ed63\") " pod="kube-system/storage-provisioner"
	Jan 10 02:42:27 old-k8s-version-736081 kubelet[1405]: I0110 02:42:27.572287    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssg6k\" (UniqueName: \"kubernetes.io/projected/cc657c06-3c7c-4be5-849e-1b627d06ed63-kube-api-access-ssg6k\") pod \"storage-provisioner\" (UID: \"cc657c06-3c7c-4be5-849e-1b627d06ed63\") " pod="kube-system/storage-provisioner"
	Jan 10 02:42:27 old-k8s-version-736081 kubelet[1405]: I0110 02:42:27.572317    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eef91741-4e4f-4500-8bfb-fc6218330aa6-config-volume\") pod \"coredns-5dd5756b68-5nbj4\" (UID: \"eef91741-4e4f-4500-8bfb-fc6218330aa6\") " pod="kube-system/coredns-5dd5756b68-5nbj4"
	Jan 10 02:42:27 old-k8s-version-736081 kubelet[1405]: W0110 02:42:27.817134    1405 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd/crio-ddbd571b15d2a975f7f8db40aaa2279b151dacce3fefad1f6661f34c475b04b1 WatchSource:0}: Error finding container ddbd571b15d2a975f7f8db40aaa2279b151dacce3fefad1f6661f34c475b04b1: Status 404 returned error can't find the container with id ddbd571b15d2a975f7f8db40aaa2279b151dacce3fefad1f6661f34c475b04b1
	Jan 10 02:42:27 old-k8s-version-736081 kubelet[1405]: I0110 02:42:27.946988    1405 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-5nbj4" podStartSLOduration=14.946948304 podCreationTimestamp="2026-01-10 02:42:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:42:27.941750715 +0000 UTC m=+27.455057413" watchObservedRunningTime="2026-01-10 02:42:27.946948304 +0000 UTC m=+27.460254994"
	Jan 10 02:42:28 old-k8s-version-736081 kubelet[1405]: I0110 02:42:28.940485    1405 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.940431049 podCreationTimestamp="2026-01-10 02:42:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:42:27.999274476 +0000 UTC m=+27.512581174" watchObservedRunningTime="2026-01-10 02:42:28.940431049 +0000 UTC m=+28.453737747"
	Jan 10 02:42:30 old-k8s-version-736081 kubelet[1405]: I0110 02:42:30.698029    1405 topology_manager.go:215] "Topology Admit Handler" podUID="0e3dec6c-049f-485c-ac7f-6e44f1f434bb" podNamespace="default" podName="busybox"
	Jan 10 02:42:30 old-k8s-version-736081 kubelet[1405]: I0110 02:42:30.796010    1405 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4bdj\" (UniqueName: \"kubernetes.io/projected/0e3dec6c-049f-485c-ac7f-6e44f1f434bb-kube-api-access-d4bdj\") pod \"busybox\" (UID: \"0e3dec6c-049f-485c-ac7f-6e44f1f434bb\") " pod="default/busybox"
	Jan 10 02:42:31 old-k8s-version-736081 kubelet[1405]: W0110 02:42:31.030091    1405 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd/crio-b7ba42dbe3393c38b859998b2f3e0c6dbfee69b419a50400759b42c3aa6029bc WatchSource:0}: Error finding container b7ba42dbe3393c38b859998b2f3e0c6dbfee69b419a50400759b42c3aa6029bc: Status 404 returned error can't find the container with id b7ba42dbe3393c38b859998b2f3e0c6dbfee69b419a50400759b42c3aa6029bc
	
	
	==> storage-provisioner [3bd3367886d69371d98cc587bef1816641030572d9c584a8efed28afcaf6ebdc] <==
	I0110 02:42:27.842283       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 02:42:27.858772       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 02:42:27.858815       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0110 02:42:27.876948       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 02:42:27.878883       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-736081_04f14f6e-c79c-47f3-97c7-94d9d18e2b7b!
	I0110 02:42:27.881335       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dd39af43-26bc-44a5-a4ee-75ffe0c111d7", APIVersion:"v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-736081_04f14f6e-c79c-47f3-97c7-94d9d18e2b7b became leader
	I0110 02:42:27.980756       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-736081_04f14f6e-c79c-47f3-97c7-94d9d18e2b7b!
	

                                                
                                                
-- /stdout --
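Note on the storage-provisioner log above: it shows the provisioner acquiring its leader-election lease on the kube-system/k8s.io-minikube-hostpath Endpoints object before starting the hostpath controller. A minimal way to inspect that election record from the host, assuming the cluster is still reachable under the same kubectl context used elsewhere in this report (the exact contents of the object are not shown in the log):

# assumes the context name used throughout this report
kubectl --context old-k8s-version-736081 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml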
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-736081 -n old-k8s-version-736081
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-736081 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.45s)
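The post-mortem above ends with a node status probe and a query for pods that are not in the Running phase (helpers_test.go:270). That check can be repeated by hand; this sketch reuses the exact command string from the log, with the context name from this report:

kubectl --context old-k8s-version-736081 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running

Empty output means every pod in every namespace reported phase Running at the time of the query.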

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-736081 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-736081 --alsologtostderr -v=1: exit status 80 (2.318590443s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-736081 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:44:01.670019  203291 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:44:01.670204  203291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:44:01.670232  203291 out.go:374] Setting ErrFile to fd 2...
	I0110 02:44:01.670252  203291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:44:01.670519  203291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:44:01.670787  203291 out.go:368] Setting JSON to false
	I0110 02:44:01.670834  203291 mustload.go:66] Loading cluster: old-k8s-version-736081
	I0110 02:44:01.671254  203291 config.go:182] Loaded profile config "old-k8s-version-736081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 02:44:01.671777  203291 cli_runner.go:164] Run: docker container inspect old-k8s-version-736081 --format={{.State.Status}}
	I0110 02:44:01.689365  203291 host.go:66] Checking if "old-k8s-version-736081" exists ...
	I0110 02:44:01.689721  203291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:44:01.762186  203291 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2026-01-10 02:44:01.752816893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:44:01.762822  203291 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22414/minikube-v1.37.0-1767924026-22414-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767924026-22414/minikube-v1.37.0-1767924026-22414-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767924026-22414-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:old-k8s-version-736081 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s
(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0110 02:44:01.766514  203291 out.go:179] * Pausing node old-k8s-version-736081 ... 
	I0110 02:44:01.770672  203291 host.go:66] Checking if "old-k8s-version-736081" exists ...
	I0110 02:44:01.771035  203291 ssh_runner.go:195] Run: systemctl --version
	I0110 02:44:01.771089  203291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:44:01.790631  203291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa Username:docker}
	I0110 02:44:01.894784  203291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:44:01.914210  203291 pause.go:52] kubelet running: true
	I0110 02:44:01.914311  203291 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:44:02.208269  203291 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:44:02.208384  203291 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:44:02.288064  203291 cri.go:96] found id: "e4bee80e037b45836e82e9804802d7c4dbc1107809a85cbfafa2e89816de854d"
	I0110 02:44:02.288087  203291 cri.go:96] found id: "2727f421ce1dca90f75e6f41a849e6e779f262f9dbb7b30e2f669a71e611aa3f"
	I0110 02:44:02.288092  203291 cri.go:96] found id: "aca38b976722762d5b41a4691f68a0f144f48e6c928381bbaff6d0b5a573fffb"
	I0110 02:44:02.288095  203291 cri.go:96] found id: "286e7b550e400fd0d858194af6e2d84faf1650cb6c86a08499e60624a568bee2"
	I0110 02:44:02.288098  203291 cri.go:96] found id: "262f20e22f472779a424dd355205ce5ab251514b5ac20a49c9e5b921a6c5371f"
	I0110 02:44:02.288102  203291 cri.go:96] found id: "6355391b1f72a7a845314325e3d914fc554a55e46c034d34f7a46d52b6942f35"
	I0110 02:44:02.288105  203291 cri.go:96] found id: "4d027e98ddd01a153e481f07a9974825140f36b8d3d38d0154f572cb3a1cd9d2"
	I0110 02:44:02.288108  203291 cri.go:96] found id: "b9d64a94931c8c5bff3ce0ba866efd0d474684e1a339c61f7118db8ef1a0bab6"
	I0110 02:44:02.288111  203291 cri.go:96] found id: "819d19fdf12b7024ad76194b4c06bf105258b02ae91cd1fbc68a9e4ad29511a7"
	I0110 02:44:02.288118  203291 cri.go:96] found id: "4a25e20e749b59fa968989a3556ab81d3a7f97cccf383c7c1210aae2311e4c20"
	I0110 02:44:02.288122  203291 cri.go:96] found id: "878284c8f494f2e5cb225e3fc8ecb321d081824e28f7239738ac7405f263ec31"
	I0110 02:44:02.288124  203291 cri.go:96] found id: ""
	I0110 02:44:02.288172  203291 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:44:02.299765  203291 retry.go:84] will retry after 100ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:44:02Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:44:02.438228  203291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:44:02.451189  203291 pause.go:52] kubelet running: false
	I0110 02:44:02.451266  203291 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:44:02.620521  203291 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:44:02.620598  203291 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:44:02.699136  203291 cri.go:96] found id: "e4bee80e037b45836e82e9804802d7c4dbc1107809a85cbfafa2e89816de854d"
	I0110 02:44:02.699163  203291 cri.go:96] found id: "2727f421ce1dca90f75e6f41a849e6e779f262f9dbb7b30e2f669a71e611aa3f"
	I0110 02:44:02.699169  203291 cri.go:96] found id: "aca38b976722762d5b41a4691f68a0f144f48e6c928381bbaff6d0b5a573fffb"
	I0110 02:44:02.699173  203291 cri.go:96] found id: "286e7b550e400fd0d858194af6e2d84faf1650cb6c86a08499e60624a568bee2"
	I0110 02:44:02.699176  203291 cri.go:96] found id: "262f20e22f472779a424dd355205ce5ab251514b5ac20a49c9e5b921a6c5371f"
	I0110 02:44:02.699180  203291 cri.go:96] found id: "6355391b1f72a7a845314325e3d914fc554a55e46c034d34f7a46d52b6942f35"
	I0110 02:44:02.699183  203291 cri.go:96] found id: "4d027e98ddd01a153e481f07a9974825140f36b8d3d38d0154f572cb3a1cd9d2"
	I0110 02:44:02.699186  203291 cri.go:96] found id: "b9d64a94931c8c5bff3ce0ba866efd0d474684e1a339c61f7118db8ef1a0bab6"
	I0110 02:44:02.699189  203291 cri.go:96] found id: "819d19fdf12b7024ad76194b4c06bf105258b02ae91cd1fbc68a9e4ad29511a7"
	I0110 02:44:02.699203  203291 cri.go:96] found id: "4a25e20e749b59fa968989a3556ab81d3a7f97cccf383c7c1210aae2311e4c20"
	I0110 02:44:02.699209  203291 cri.go:96] found id: "878284c8f494f2e5cb225e3fc8ecb321d081824e28f7239738ac7405f263ec31"
	I0110 02:44:02.699212  203291 cri.go:96] found id: ""
	I0110 02:44:02.699282  203291 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:44:02.905761  203291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:44:02.919072  203291 pause.go:52] kubelet running: false
	I0110 02:44:02.919136  203291 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:44:03.087426  203291 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:44:03.087537  203291 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:44:03.168455  203291 cri.go:96] found id: "e4bee80e037b45836e82e9804802d7c4dbc1107809a85cbfafa2e89816de854d"
	I0110 02:44:03.168478  203291 cri.go:96] found id: "2727f421ce1dca90f75e6f41a849e6e779f262f9dbb7b30e2f669a71e611aa3f"
	I0110 02:44:03.168484  203291 cri.go:96] found id: "aca38b976722762d5b41a4691f68a0f144f48e6c928381bbaff6d0b5a573fffb"
	I0110 02:44:03.168487  203291 cri.go:96] found id: "286e7b550e400fd0d858194af6e2d84faf1650cb6c86a08499e60624a568bee2"
	I0110 02:44:03.168490  203291 cri.go:96] found id: "262f20e22f472779a424dd355205ce5ab251514b5ac20a49c9e5b921a6c5371f"
	I0110 02:44:03.168494  203291 cri.go:96] found id: "6355391b1f72a7a845314325e3d914fc554a55e46c034d34f7a46d52b6942f35"
	I0110 02:44:03.168497  203291 cri.go:96] found id: "4d027e98ddd01a153e481f07a9974825140f36b8d3d38d0154f572cb3a1cd9d2"
	I0110 02:44:03.168500  203291 cri.go:96] found id: "b9d64a94931c8c5bff3ce0ba866efd0d474684e1a339c61f7118db8ef1a0bab6"
	I0110 02:44:03.168503  203291 cri.go:96] found id: "819d19fdf12b7024ad76194b4c06bf105258b02ae91cd1fbc68a9e4ad29511a7"
	I0110 02:44:03.168526  203291 cri.go:96] found id: "4a25e20e749b59fa968989a3556ab81d3a7f97cccf383c7c1210aae2311e4c20"
	I0110 02:44:03.168534  203291 cri.go:96] found id: "878284c8f494f2e5cb225e3fc8ecb321d081824e28f7239738ac7405f263ec31"
	I0110 02:44:03.168538  203291 cri.go:96] found id: ""
	I0110 02:44:03.168584  203291 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:44:03.633036  203291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:44:03.646503  203291 pause.go:52] kubelet running: false
	I0110 02:44:03.646568  203291 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:44:03.841410  203291 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:44:03.841487  203291 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:44:03.909463  203291 cri.go:96] found id: "e4bee80e037b45836e82e9804802d7c4dbc1107809a85cbfafa2e89816de854d"
	I0110 02:44:03.909485  203291 cri.go:96] found id: "2727f421ce1dca90f75e6f41a849e6e779f262f9dbb7b30e2f669a71e611aa3f"
	I0110 02:44:03.909492  203291 cri.go:96] found id: "aca38b976722762d5b41a4691f68a0f144f48e6c928381bbaff6d0b5a573fffb"
	I0110 02:44:03.909504  203291 cri.go:96] found id: "286e7b550e400fd0d858194af6e2d84faf1650cb6c86a08499e60624a568bee2"
	I0110 02:44:03.909508  203291 cri.go:96] found id: "262f20e22f472779a424dd355205ce5ab251514b5ac20a49c9e5b921a6c5371f"
	I0110 02:44:03.909511  203291 cri.go:96] found id: "6355391b1f72a7a845314325e3d914fc554a55e46c034d34f7a46d52b6942f35"
	I0110 02:44:03.909514  203291 cri.go:96] found id: "4d027e98ddd01a153e481f07a9974825140f36b8d3d38d0154f572cb3a1cd9d2"
	I0110 02:44:03.909537  203291 cri.go:96] found id: "b9d64a94931c8c5bff3ce0ba866efd0d474684e1a339c61f7118db8ef1a0bab6"
	I0110 02:44:03.909545  203291 cri.go:96] found id: "819d19fdf12b7024ad76194b4c06bf105258b02ae91cd1fbc68a9e4ad29511a7"
	I0110 02:44:03.909552  203291 cri.go:96] found id: "4a25e20e749b59fa968989a3556ab81d3a7f97cccf383c7c1210aae2311e4c20"
	I0110 02:44:03.909559  203291 cri.go:96] found id: "878284c8f494f2e5cb225e3fc8ecb321d081824e28f7239738ac7405f263ec31"
	I0110 02:44:03.909562  203291 cri.go:96] found id: ""
	I0110 02:44:03.909618  203291 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:44:03.925379  203291 out.go:203] 
	W0110 02:44:03.929469  203291 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:44:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:44:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 02:44:03.929496  203291 out.go:285] * 
	* 
	W0110 02:44:03.932379  203291 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 02:44:03.934947  203291 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-736081 --alsologtostderr -v=1 failed: exit status 80
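The exit status 80 comes from the container-listing step of the pause path: after disabling the kubelet, minikube enumerates kube-system/kubernetes-dashboard/istio-operator containers through crictl and then shells out to sudo runc list -f json, which fails on this node with "open /run/runc: no such file or directory", so the GUEST_PAUSE error above is raised after the retries are exhausted. A rough manual reproduction inside the node, reusing the command strings from the stderr log above (the profile name is the one used throughout this report; whether /run/runc is expected to exist on a crio node at this point is not established by the log):

# list kube-system containers the same way the pause path does
minikube -p old-k8s-version-736081 ssh -- sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
# the step that fails in this run
minikube -p old-k8s-version-736081 ssh -- sudo runc list -f json
# diagnostic assumption: check whether the runc state directory is present at all
minikube -p old-k8s-version-736081 ssh -- ls -d /run/runc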
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-736081
helpers_test.go:244: (dbg) docker inspect old-k8s-version-736081:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd",
	        "Created": "2026-01-10T02:41:32.674196479Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 200654,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:42:54.898649089Z",
	            "FinishedAt": "2026-01-10T02:42:54.092576538Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd/hosts",
	        "LogPath": "/var/lib/docker/containers/a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd/a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd-json.log",
	        "Name": "/old-k8s-version-736081",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-736081:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-736081",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd",
	                "LowerDir": "/var/lib/docker/overlay2/900fe6e640aea64a7d338ba444221d388c553a58a464e856a98199bafb2b388e-init/diff:/var/lib/docker/overlay2/2b235871be32ebd3e55c0a490bf5b289824c6be94b4d2763f5bca3b814af3fd1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/900fe6e640aea64a7d338ba444221d388c553a58a464e856a98199bafb2b388e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/900fe6e640aea64a7d338ba444221d388c553a58a464e856a98199bafb2b388e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/900fe6e640aea64a7d338ba444221d388c553a58a464e856a98199bafb2b388e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-736081",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-736081/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-736081",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-736081",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-736081",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "490d1248c43e1611b3397942a91126dabcac62865444217718486531d8812027",
	            "SandboxKey": "/var/run/docker/netns/490d1248c43e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-736081": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:ad:57:2e:8b:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "19dbfb1518ac950f9693694cb0229451b62340819974a22c4a52e8192582b225",
	                    "EndpointID": "cf0a31f55eafc825a6ef4964ba42aa9fb9522238e272e1d4774afc7bbe00d626",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-736081",
	                        "a4844cb5bc1c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
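The inspect output above is where the harness (and the failing pause command) read the forwarded host ports from: port 22/tcp of the container is published on 127.0.0.1:33053, matching the SSH client the pause run opened earlier in the stderr log. The same value can be extracted directly with the Go template the test uses (command string copied from that log):

docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
# prints '33053' for the container state captured in this report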
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-736081 -n old-k8s-version-736081
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-736081 -n old-k8s-version-736081: exit status 2 (358.026555ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-736081 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-736081 logs -n 25: (1.240686313s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-989144 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo containerd config dump                                                                                                                                                                                                  │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo crio config                                                                                                                                                                                                             │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ delete  │ -p cilium-989144                                                                                                                                                                                                                              │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │ 10 Jan 26 02:36 UTC │
	│ start   │ -p cert-expiration-213257 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-213257    │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │ 10 Jan 26 02:37 UTC │
	│ start   │ -p cert-expiration-213257 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-213257    │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:40 UTC │
	│ delete  │ -p cert-expiration-213257                                                                                                                                                                                                                     │ cert-expiration-213257    │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:40 UTC │
	│ start   │ -p force-systemd-flag-038359 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-038359 │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │                     │
	│ delete  │ -p force-systemd-env-088457                                                                                                                                                                                                                   │ force-systemd-env-088457  │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:40 UTC │
	│ start   │ -p cert-options-295914 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-295914       │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:41 UTC │
	│ ssh     │ cert-options-295914 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-295914       │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:41 UTC │
	│ ssh     │ -p cert-options-295914 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-295914       │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:41 UTC │
	│ delete  │ -p cert-options-295914                                                                                                                                                                                                                        │ cert-options-295914       │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:41 UTC │
	│ start   │ -p old-k8s-version-736081 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:42 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-736081 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │                     │
	│ stop    │ -p old-k8s-version-736081 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-736081 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:42 UTC │
	│ start   │ -p old-k8s-version-736081 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:43 UTC │
	│ image   │ old-k8s-version-736081 image list --format=json                                                                                                                                                                                               │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ pause   │ -p old-k8s-version-736081 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:42:54
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:42:54.633575  200525 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:42:54.633708  200525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:42:54.633718  200525 out.go:374] Setting ErrFile to fd 2...
	I0110 02:42:54.633724  200525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:42:54.633970  200525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:42:54.634351  200525 out.go:368] Setting JSON to false
	I0110 02:42:54.635150  200525 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5124,"bootTime":1768007851,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 02:42:54.635224  200525 start.go:143] virtualization:  
	I0110 02:42:54.638178  200525 out.go:179] * [old-k8s-version-736081] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:42:54.641930  200525 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:42:54.642002  200525 notify.go:221] Checking for updates...
	I0110 02:42:54.647727  200525 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:42:54.650635  200525 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:42:54.653657  200525 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 02:42:54.658817  200525 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:42:54.661750  200525 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:42:54.665539  200525 config.go:182] Loaded profile config "old-k8s-version-736081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 02:42:54.669327  200525 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I0110 02:42:54.672081  200525 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:42:54.694208  200525 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:42:54.694332  200525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:42:54.753077  200525 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:42:54.743907184 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:42:54.753193  200525 docker.go:319] overlay module found
	I0110 02:42:54.756162  200525 out.go:179] * Using the docker driver based on existing profile
	I0110 02:42:54.758999  200525 start.go:309] selected driver: docker
	I0110 02:42:54.759013  200525 start.go:928] validating driver "docker" against &{Name:old-k8s-version-736081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-736081 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:42:54.759114  200525 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:42:54.759862  200525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:42:54.816472  200525 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:42:54.806826068 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:42:54.816824  200525 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:42:54.816862  200525 cni.go:84] Creating CNI manager for ""
	I0110 02:42:54.816916  200525 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:42:54.816958  200525 start.go:353] cluster config:
	{Name:old-k8s-version-736081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-736081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:42:54.820117  200525 out.go:179] * Starting "old-k8s-version-736081" primary control-plane node in "old-k8s-version-736081" cluster
	I0110 02:42:54.822975  200525 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:42:54.825714  200525 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:42:54.828494  200525 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0110 02:42:54.828551  200525 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0110 02:42:54.828564  200525 cache.go:65] Caching tarball of preloaded images
	I0110 02:42:54.828569  200525 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:42:54.828644  200525 preload.go:251] Found /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 02:42:54.828654  200525 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
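
The preload steps above amount to probing for a versioned tarball on disk before falling back to a download. A minimal sketch of that existence check, assuming a cache layout matching the paths in the log (preloadPath is an invented helper):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the expected tarball name for a Kubernetes version and
// container runtime; the directory layout mirrors what the log shows, but the
// helper itself is hypothetical.
func preloadPath(cacheDir, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-arm64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(cacheDir, "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.ExpandEnv("$HOME/.minikube/cache"), "v1.28.0", "cri-o")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload, skipping download:", p)
	} else {
		fmt.Println("no local preload, would download:", p)
	}
}
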
	I0110 02:42:54.828762  200525 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/config.json ...
	I0110 02:42:54.847726  200525 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:42:54.847750  200525 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:42:54.847771  200525 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:42:54.847838  200525 start.go:360] acquireMachinesLock for old-k8s-version-736081: {Name:mk5c17d262a96ce13234dbad01b409b9bd033454 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:42:54.847908  200525 start.go:364] duration metric: took 46.784µs to acquireMachinesLock for "old-k8s-version-736081"
	I0110 02:42:54.847931  200525 start.go:96] Skipping create...Using existing machine configuration
	I0110 02:42:54.847936  200525 fix.go:54] fixHost starting: 
	I0110 02:42:54.848195  200525 cli_runner.go:164] Run: docker container inspect old-k8s-version-736081 --format={{.State.Status}}
	I0110 02:42:54.864835  200525 fix.go:112] recreateIfNeeded on old-k8s-version-736081: state=Stopped err=<nil>
	W0110 02:42:54.864864  200525 fix.go:138] unexpected machine state, will restart: <nil>
	I0110 02:42:54.868005  200525 out.go:252] * Restarting existing docker container for "old-k8s-version-736081" ...
	I0110 02:42:54.868106  200525 cli_runner.go:164] Run: docker start old-k8s-version-736081
	I0110 02:42:55.138425  200525 cli_runner.go:164] Run: docker container inspect old-k8s-version-736081 --format={{.State.Status}}
	I0110 02:42:55.162235  200525 kic.go:430] container "old-k8s-version-736081" state is running.
	I0110 02:42:55.162638  200525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-736081
	I0110 02:42:55.189680  200525 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/config.json ...
	I0110 02:42:55.190272  200525 machine.go:94] provisionDockerMachine start ...
	I0110 02:42:55.190396  200525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:42:55.218343  200525 main.go:144] libmachine: Using SSH client type: native
	I0110 02:42:55.221597  200525 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I0110 02:42:55.221617  200525 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:42:55.222233  200525 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54372->127.0.0.1:33053: read: connection reset by peer
	I0110 02:42:58.371411  200525 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-736081
	
	I0110 02:42:58.371436  200525 ubuntu.go:182] provisioning hostname "old-k8s-version-736081"
	I0110 02:42:58.371501  200525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:42:58.389421  200525 main.go:144] libmachine: Using SSH client type: native
	I0110 02:42:58.389743  200525 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I0110 02:42:58.389763  200525 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-736081 && echo "old-k8s-version-736081" | sudo tee /etc/hostname
	I0110 02:42:58.548692  200525 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-736081
	
	I0110 02:42:58.548788  200525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:42:58.566821  200525 main.go:144] libmachine: Using SSH client type: native
	I0110 02:42:58.567131  200525 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I0110 02:42:58.567147  200525 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-736081' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-736081/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-736081' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:42:58.711947  200525 main.go:144] libmachine: SSH cmd err, output: <nil>: 
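
The SSH command above keeps exactly one 127.0.1.1 mapping for the machine hostname in /etc/hosts. The same idempotent edit, done locally in Go rather than over SSH (ensureHostsEntry is an invented helper, not minikube code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the intent of the shell above: map 127.0.1.1 to the
// hostname if no entry for it exists yet, rewriting an existing 127.0.1.1 line
// when there is one and appending otherwise.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for i, line := range lines {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[1] == hostname {
			return nil // already mapped
		}
		if strings.HasPrefix(line, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	lines = append(lines, fmt.Sprintf("127.0.1.1 %s", hostname))
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "old-k8s-version-736081"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
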
	I0110 02:42:58.711970  200525 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2353/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2353/.minikube}
	I0110 02:42:58.712006  200525 ubuntu.go:190] setting up certificates
	I0110 02:42:58.712016  200525 provision.go:84] configureAuth start
	I0110 02:42:58.712094  200525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-736081
	I0110 02:42:58.728834  200525 provision.go:143] copyHostCerts
	I0110 02:42:58.728902  200525 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem, removing ...
	I0110 02:42:58.728922  200525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem
	I0110 02:42:58.729021  200525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem (1082 bytes)
	I0110 02:42:58.729124  200525 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem, removing ...
	I0110 02:42:58.729135  200525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem
	I0110 02:42:58.729162  200525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem (1123 bytes)
	I0110 02:42:58.729224  200525 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem, removing ...
	I0110 02:42:58.729232  200525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem
	I0110 02:42:58.729256  200525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem (1679 bytes)
	I0110 02:42:58.729306  200525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-736081 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-736081]
	I0110 02:42:59.175449  200525 provision.go:177] copyRemoteCerts
	I0110 02:42:59.175513  200525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:42:59.175551  200525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:42:59.193338  200525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa Username:docker}
	I0110 02:42:59.295747  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0110 02:42:59.312951  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 02:42:59.330108  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:42:59.347291  200525 provision.go:87] duration metric: took 635.254974ms to configureAuth
	I0110 02:42:59.347316  200525 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:42:59.347503  200525 config.go:182] Loaded profile config "old-k8s-version-736081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 02:42:59.347606  200525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:42:59.364905  200525 main.go:144] libmachine: Using SSH client type: native
	I0110 02:42:59.365233  200525 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I0110 02:42:59.365256  200525 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:42:59.710693  200525 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:42:59.710720  200525 machine.go:97] duration metric: took 4.520433147s to provisionDockerMachine
	I0110 02:42:59.710732  200525 start.go:293] postStartSetup for "old-k8s-version-736081" (driver="docker")
	I0110 02:42:59.710761  200525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:42:59.710846  200525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:42:59.710911  200525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:42:59.734271  200525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa Username:docker}
	I0110 02:42:59.839432  200525 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:42:59.842673  200525 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:42:59.842703  200525 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:42:59.842715  200525 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/addons for local assets ...
	I0110 02:42:59.842770  200525 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/files for local assets ...
	I0110 02:42:59.842858  200525 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> 41682.pem in /etc/ssl/certs
	I0110 02:42:59.842960  200525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:42:59.850304  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:42:59.867482  200525 start.go:296] duration metric: took 156.734698ms for postStartSetup
	I0110 02:42:59.867607  200525 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:42:59.867668  200525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:42:59.884537  200525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa Username:docker}
	I0110 02:42:59.984906  200525 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:42:59.989530  200525 fix.go:56] duration metric: took 5.141587404s for fixHost
	I0110 02:42:59.989552  200525 start.go:83] releasing machines lock for "old-k8s-version-736081", held for 5.141632925s
	I0110 02:42:59.989630  200525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-736081
	I0110 02:43:00.013275  200525 ssh_runner.go:195] Run: cat /version.json
	I0110 02:43:00.013335  200525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:43:00.014350  200525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:43:00.014428  200525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:43:00.036387  200525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa Username:docker}
	I0110 02:43:00.078395  200525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa Username:docker}
	I0110 02:43:00.224167  200525 ssh_runner.go:195] Run: systemctl --version
	I0110 02:43:00.232595  200525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:43:00.330673  200525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:43:00.436471  200525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:43:00.436555  200525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:43:00.446196  200525 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 02:43:00.446221  200525 start.go:496] detecting cgroup driver to use...
	I0110 02:43:00.446253  200525 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 02:43:00.446316  200525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:43:00.462938  200525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:43:00.478734  200525 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:43:00.478794  200525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:43:00.497393  200525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:43:00.510650  200525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:43:00.617297  200525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:43:00.726156  200525 docker.go:234] disabling docker service ...
	I0110 02:43:00.726215  200525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:43:00.741213  200525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:43:00.754491  200525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:43:00.859761  200525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:43:00.979051  200525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:43:00.991285  200525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:43:01.004081  200525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0110 02:43:01.004162  200525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:43:01.014637  200525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 02:43:01.014737  200525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:43:01.023931  200525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:43:01.032778  200525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:43:01.041833  200525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:43:01.049878  200525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:43:01.058854  200525 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:43:01.066793  200525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:43:01.075160  200525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:43:01.082847  200525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:43:01.090164  200525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:43:01.216371  200525 ssh_runner.go:195] Run: sudo systemctl restart crio
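
The sed invocations above set the pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. A sketch of the equivalent in-process edit (rewriteCrioConf is illustrative; the path and values are taken from the log):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf performs the same two whole-line substitutions the sed
// commands above do: pin the pause image and the cgroup manager.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf(`pause_image = %q`, pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf(`cgroup_manager = %q`, cgroupManager)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.9", "cgroupfs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
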
	I0110 02:43:01.410063  200525 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:43:01.410184  200525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:43:01.414145  200525 start.go:574] Will wait 60s for crictl version
	I0110 02:43:01.414249  200525 ssh_runner.go:195] Run: which crictl
	I0110 02:43:01.419081  200525 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:43:01.445627  200525 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:43:01.445772  200525 ssh_runner.go:195] Run: crio --version
	I0110 02:43:01.477017  200525 ssh_runner.go:195] Run: crio --version
	I0110 02:43:01.514666  200525 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.35.0 ...
	I0110 02:43:01.517606  200525 cli_runner.go:164] Run: docker network inspect old-k8s-version-736081 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:43:01.538061  200525 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 02:43:01.542277  200525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:43:01.552121  200525 kubeadm.go:884] updating cluster {Name:old-k8s-version-736081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-736081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:43:01.552232  200525 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0110 02:43:01.552282  200525 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:43:01.586928  200525 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:43:01.586956  200525 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:43:01.587017  200525 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:43:01.613524  200525 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:43:01.613549  200525 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:43:01.613559  200525 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I0110 02:43:01.613760  200525 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-736081 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-736081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
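
The drop-in shown above is generated by filling node-specific values (Kubernetes version, hostname override, node IP) into a fixed template. A trimmed-down stand-in using text/template, with only those varying fields templated rather than the full flag set:

package main

import (
	"os"
	"text/template"
)

// kubeletUnit is a reduced stand-in for the systemd drop-in in the log; only
// the fields that vary per node are left as template actions.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.28.0", "old-k8s-version-736081", "192.168.76.2"})
}
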
	I0110 02:43:01.613855  200525 ssh_runner.go:195] Run: crio config
	I0110 02:43:01.673961  200525 cni.go:84] Creating CNI manager for ""
	I0110 02:43:01.673989  200525 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:43:01.674009  200525 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:43:01.674048  200525 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-736081 NodeName:old-k8s-version-736081 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:43:01.674202  200525 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-736081"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
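
The generated documents above have to agree on the pod network: networking.podSubnet in the ClusterConfiguration, clusterCIDR in the KubeProxyConfiguration, and the pod CIDR used by the CNI are all 10.244.0.0/16 in this run. A tiny check of that invariant (sameCIDR is an invented helper, not part of the generator):

package main

import (
	"fmt"
	"net"
)

// sameCIDR reports whether two CIDR strings describe the same network, the
// kind of consistency the generated kubeadm/kube-proxy configs rely on.
func sameCIDR(a, b string) (bool, error) {
	_, na, err := net.ParseCIDR(a)
	if err != nil {
		return false, err
	}
	_, nb, err := net.ParseCIDR(b)
	if err != nil {
		return false, err
	}
	return na.String() == nb.String(), nil
}

func main() {
	ok, err := sameCIDR("10.244.0.0/16", "10.244.0.0/16")
	fmt.Println(ok, err) // true <nil>
}
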
	
	I0110 02:43:01.674293  200525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I0110 02:43:01.683657  200525 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:43:01.683730  200525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:43:01.694414  200525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0110 02:43:01.707601  200525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:43:01.720365  200525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0110 02:43:01.734290  200525 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:43:01.737903  200525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:43:01.747462  200525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:43:01.854769  200525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:43:01.870975  200525 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081 for IP: 192.168.76.2
	I0110 02:43:01.870997  200525 certs.go:195] generating shared ca certs ...
	I0110 02:43:01.871040  200525 certs.go:227] acquiring lock for ca certs: {Name:mk60bb8578732e3e8efcf11c7d77fb48585828c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:43:01.871227  200525 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key
	I0110 02:43:01.871297  200525 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key
	I0110 02:43:01.871310  200525 certs.go:257] generating profile certs ...
	I0110 02:43:01.871423  200525 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.key
	I0110 02:43:01.871518  200525 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/apiserver.key.ee08958c
	I0110 02:43:01.871591  200525 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/proxy-client.key
	I0110 02:43:01.871724  200525 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem (1338 bytes)
	W0110 02:43:01.871777  200525 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168_empty.pem, impossibly tiny 0 bytes
	I0110 02:43:01.871820  200525 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:43:01.871877  200525 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:43:01.871909  200525 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:43:01.871958  200525 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem (1679 bytes)
	I0110 02:43:01.872027  200525 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:43:01.872704  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:43:01.895944  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:43:01.913973  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:43:01.958899  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 02:43:01.991190  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0110 02:43:02.016789  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:43:02.037610  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:43:02.059313  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 02:43:02.079903  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I0110 02:43:02.112890  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:43:02.135080  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I0110 02:43:02.154926  200525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:43:02.168633  200525 ssh_runner.go:195] Run: openssl version
	I0110 02:43:02.174759  200525 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I0110 02:43:02.182210  200525 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I0110 02:43:02.189800  200525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I0110 02:43:02.193703  200525 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:58 /usr/share/ca-certificates/41682.pem
	I0110 02:43:02.193817  200525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I0110 02:43:02.236461  200525 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:43:02.244163  200525 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:43:02.251892  200525 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:43:02.259592  200525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:43:02.263933  200525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:43:02.264001  200525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:43:02.304915  200525 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:43:02.312819  200525 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I0110 02:43:02.320956  200525 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I0110 02:43:02.329305  200525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I0110 02:43:02.333301  200525 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:58 /usr/share/ca-certificates/4168.pem
	I0110 02:43:02.333417  200525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I0110 02:43:02.377069  200525 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:43:02.384861  200525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:43:02.388656  200525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 02:43:02.430744  200525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 02:43:02.471536  200525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 02:43:02.512720  200525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 02:43:02.562523  200525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 02:43:02.635309  200525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
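
Each `openssl x509 -checkend 86400` above asks whether a certificate expires within the next 24 hours so that soon-to-expire certs can be regenerated. The same test using Go's standard library (expiresWithin is illustrative; the path is one of those checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -checkend`: it reports whether the first
// certificate in a PEM file expires inside the given window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
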
	I0110 02:43:02.744784  200525 kubeadm.go:401] StartCluster: {Name:old-k8s-version-736081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-736081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:43:02.744924  200525 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:43:02.745018  200525 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:43:02.820520  200525 cri.go:96] found id: "6355391b1f72a7a845314325e3d914fc554a55e46c034d34f7a46d52b6942f35"
	I0110 02:43:02.820595  200525 cri.go:96] found id: "4d027e98ddd01a153e481f07a9974825140f36b8d3d38d0154f572cb3a1cd9d2"
	I0110 02:43:02.820614  200525 cri.go:96] found id: "b9d64a94931c8c5bff3ce0ba866efd0d474684e1a339c61f7118db8ef1a0bab6"
	I0110 02:43:02.820630  200525 cri.go:96] found id: "819d19fdf12b7024ad76194b4c06bf105258b02ae91cd1fbc68a9e4ad29511a7"
	I0110 02:43:02.820662  200525 cri.go:96] found id: ""
	I0110 02:43:02.820747  200525 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 02:43:02.833517  200525 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:43:02Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:43:02.833648  200525 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:43:02.849987  200525 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 02:43:02.850059  200525 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 02:43:02.850150  200525 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 02:43:02.862105  200525 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 02:43:02.862612  200525 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-736081" does not appear in /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:43:02.862770  200525 kubeconfig.go:62] /home/jenkins/minikube-integration/22414-2353/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-736081" cluster setting kubeconfig missing "old-k8s-version-736081" context setting]
	I0110 02:43:02.863138  200525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:43:02.864688  200525 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 02:43:02.875539  200525 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0110 02:43:02.875615  200525 kubeadm.go:602] duration metric: took 25.536815ms to restartPrimaryControlPlane
	I0110 02:43:02.875640  200525 kubeadm.go:403] duration metric: took 130.864119ms to StartCluster
	I0110 02:43:02.875685  200525 settings.go:142] acquiring lock: {Name:mkf19ab3097ad600bdb33c5b47b20062ddaabac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:43:02.875778  200525 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:43:02.876509  200525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:43:02.876983  200525 config.go:182] Loaded profile config "old-k8s-version-736081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 02:43:02.876772  200525 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:43:02.877113  200525 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:43:02.877214  200525 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-736081"
	I0110 02:43:02.877242  200525 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-736081"
	W0110 02:43:02.877273  200525 addons.go:248] addon storage-provisioner should already be in state true
	I0110 02:43:02.877311  200525 host.go:66] Checking if "old-k8s-version-736081" exists ...
	I0110 02:43:02.878156  200525 cli_runner.go:164] Run: docker container inspect old-k8s-version-736081 --format={{.State.Status}}
	I0110 02:43:02.878725  200525 addons.go:70] Setting dashboard=true in profile "old-k8s-version-736081"
	I0110 02:43:02.878758  200525 addons.go:239] Setting addon dashboard=true in "old-k8s-version-736081"
	W0110 02:43:02.878766  200525 addons.go:248] addon dashboard should already be in state true
	I0110 02:43:02.878801  200525 host.go:66] Checking if "old-k8s-version-736081" exists ...
	I0110 02:43:02.878846  200525 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-736081"
	I0110 02:43:02.878886  200525 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-736081"
	I0110 02:43:02.879215  200525 cli_runner.go:164] Run: docker container inspect old-k8s-version-736081 --format={{.State.Status}}
	I0110 02:43:02.879271  200525 cli_runner.go:164] Run: docker container inspect old-k8s-version-736081 --format={{.State.Status}}
	I0110 02:43:02.887863  200525 out.go:179] * Verifying Kubernetes components...
	I0110 02:43:02.892050  200525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:43:02.936117  200525 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:43:02.940072  200525 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-736081"
	W0110 02:43:02.940100  200525 addons.go:248] addon default-storageclass should already be in state true
	I0110 02:43:02.940128  200525 host.go:66] Checking if "old-k8s-version-736081" exists ...
	I0110 02:43:02.940627  200525 cli_runner.go:164] Run: docker container inspect old-k8s-version-736081 --format={{.State.Status}}
	I0110 02:43:02.944453  200525 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:43:02.944475  200525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:43:02.944532  200525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:43:02.961272  200525 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 02:43:02.964217  200525 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 02:43:02.967874  200525 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 02:43:02.967897  200525 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 02:43:02.967957  200525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:43:02.986518  200525 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:43:02.986538  200525 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:43:02.986868  200525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:43:03.023715  200525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa Username:docker}
	I0110 02:43:03.058446  200525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa Username:docker}
	I0110 02:43:03.068181  200525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa Username:docker}
	I0110 02:43:03.241869  200525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:43:03.273744  200525 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-736081" to be "Ready" ...
	I0110 02:43:03.319693  200525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:43:03.402460  200525 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 02:43:03.402528  200525 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 02:43:03.410620  200525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:43:03.473718  200525 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 02:43:03.473789  200525 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 02:43:03.550710  200525 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 02:43:03.550742  200525 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 02:43:03.619069  200525 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 02:43:03.619093  200525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 02:43:03.672917  200525 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 02:43:03.672944  200525 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 02:43:03.697169  200525 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 02:43:03.697208  200525 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 02:43:03.715731  200525 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 02:43:03.715829  200525 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 02:43:03.745811  200525 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 02:43:03.745831  200525 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 02:43:03.769201  200525 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:43:03.769271  200525 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 02:43:03.791374  200525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
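
The addon manifests are applied with a single kubectl invocation that chains one -f per file, as shown above. A sketch of assembling that command (applyManifests is an invented helper; the binary and kubeconfig paths come from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifests builds one `kubectl apply` command with a -f flag per addon
// manifest, pointed at the in-cluster kubeconfig.
func applyManifests(kubectl, kubeconfig string, files []string) *exec.Cmd {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	return cmd
}

func main() {
	cmd := applyManifests("/var/lib/minikube/binaries/v1.28.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{"/etc/kubernetes/addons/dashboard-ns.yaml", "/etc/kubernetes/addons/dashboard-svc.yaml"})
	fmt.Println(cmd.String())
}
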
	I0110 02:43:08.356532  200525 node_ready.go:49] node "old-k8s-version-736081" is "Ready"
	I0110 02:43:08.356559  200525 node_ready.go:38] duration metric: took 5.082773305s for node "old-k8s-version-736081" to be "Ready" ...
	I0110 02:43:08.356573  200525 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:43:08.356631  200525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:43:09.669760  200525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.350034323s)
	I0110 02:43:09.669827  200525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.259149257s)
	I0110 02:43:10.313612  200525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.522153535s)
	I0110 02:43:10.313804  200525 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.957162651s)
	I0110 02:43:10.313827  200525 api_server.go:72] duration metric: took 7.436743343s to wait for apiserver process to appear ...
	I0110 02:43:10.313834  200525 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:43:10.313850  200525 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:43:10.316785  200525 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-736081 addons enable metrics-server
	
	I0110 02:43:10.319810  200525 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I0110 02:43:10.321969  200525 addons.go:530] duration metric: took 7.444856176s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I0110 02:43:10.332459  200525 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 02:43:10.334945  200525 api_server.go:141] control plane version: v1.28.0
	I0110 02:43:10.334969  200525 api_server.go:131] duration metric: took 21.13003ms to wait for apiserver health ...
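
The healthz wait above polls https://192.168.76.2:8443/healthz until it answers 200. A bare-bones version of that loop (the real check authenticates with the cluster certificates; this sketch skips TLS verification for brevity and waitForHealthz is an invented helper):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an HTTPS endpoint until it returns 200 or the deadline
// passes, mirroring the apiserver healthz wait in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.76.2:8443/healthz", 60*time.Second))
}
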
	I0110 02:43:10.334979  200525 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:43:10.341690  200525 system_pods.go:59] 8 kube-system pods found
	I0110 02:43:10.341736  200525 system_pods.go:61] "coredns-5dd5756b68-5nbj4" [eef91741-4e4f-4500-8bfb-fc6218330aa6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:43:10.341746  200525 system_pods.go:61] "etcd-old-k8s-version-736081" [98f5a6d4-ccda-4c98-8e24-94a1c2e4ddd0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:43:10.341757  200525 system_pods.go:61] "kindnet-gx95x" [60c01285-ba95-4583-a0c0-b55ef4afab1f] Running
	I0110 02:43:10.341765  200525 system_pods.go:61] "kube-apiserver-old-k8s-version-736081" [ff94f7cb-219a-422e-af5a-93d1e5995c4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:43:10.341779  200525 system_pods.go:61] "kube-controller-manager-old-k8s-version-736081" [21c33917-1238-4581-bb07-ec7199814749] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:43:10.341785  200525 system_pods.go:61] "kube-proxy-kngxj" [121d4fcc-5f3d-4c31-9cec-a560afde1be1] Running
	I0110 02:43:10.341798  200525 system_pods.go:61] "kube-scheduler-old-k8s-version-736081" [f644c304-2337-4ddf-a914-02772d8ad2ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:43:10.341805  200525 system_pods.go:61] "storage-provisioner" [cc657c06-3c7c-4be5-849e-1b627d06ed63] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:43:10.341815  200525 system_pods.go:74] duration metric: took 6.831014ms to wait for pod list to return data ...
	I0110 02:43:10.341824  200525 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:43:10.347398  200525 default_sa.go:45] found service account: "default"
	I0110 02:43:10.347427  200525 default_sa.go:55] duration metric: took 5.595644ms for default service account to be created ...
	I0110 02:43:10.347441  200525 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:43:10.354024  200525 system_pods.go:86] 8 kube-system pods found
	I0110 02:43:10.354064  200525 system_pods.go:89] "coredns-5dd5756b68-5nbj4" [eef91741-4e4f-4500-8bfb-fc6218330aa6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:43:10.354083  200525 system_pods.go:89] "etcd-old-k8s-version-736081" [98f5a6d4-ccda-4c98-8e24-94a1c2e4ddd0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:43:10.354089  200525 system_pods.go:89] "kindnet-gx95x" [60c01285-ba95-4583-a0c0-b55ef4afab1f] Running
	I0110 02:43:10.354098  200525 system_pods.go:89] "kube-apiserver-old-k8s-version-736081" [ff94f7cb-219a-422e-af5a-93d1e5995c4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:43:10.354105  200525 system_pods.go:89] "kube-controller-manager-old-k8s-version-736081" [21c33917-1238-4581-bb07-ec7199814749] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:43:10.354111  200525 system_pods.go:89] "kube-proxy-kngxj" [121d4fcc-5f3d-4c31-9cec-a560afde1be1] Running
	I0110 02:43:10.354117  200525 system_pods.go:89] "kube-scheduler-old-k8s-version-736081" [f644c304-2337-4ddf-a914-02772d8ad2ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:43:10.354122  200525 system_pods.go:89] "storage-provisioner" [cc657c06-3c7c-4be5-849e-1b627d06ed63] Running
	I0110 02:43:10.354130  200525 system_pods.go:126] duration metric: took 6.682628ms to wait for k8s-apps to be running ...
	I0110 02:43:10.354141  200525 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:43:10.354209  200525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:43:10.367565  200525 system_svc.go:56] duration metric: took 13.411237ms WaitForService to wait for kubelet
	I0110 02:43:10.367595  200525 kubeadm.go:587] duration metric: took 7.490509165s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:43:10.367615  200525 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:43:10.372471  200525 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 02:43:10.372552  200525 node_conditions.go:123] node cpu capacity is 2
	I0110 02:43:10.372581  200525 node_conditions.go:105] duration metric: took 4.960075ms to run NodePressure ...
	I0110 02:43:10.372607  200525 start.go:242] waiting for startup goroutines ...
	I0110 02:43:10.372641  200525 start.go:247] waiting for cluster config update ...
	I0110 02:43:10.372671  200525 start.go:256] writing updated cluster config ...
	I0110 02:43:10.372995  200525 ssh_runner.go:195] Run: rm -f paused
	I0110 02:43:10.377677  200525 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:43:10.384865  200525 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-5nbj4" in "kube-system" namespace to be "Ready" or be gone ...
	W0110 02:43:12.390575  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:14.391074  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:16.391175  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:18.391554  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:20.890146  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:22.890302  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:24.891181  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:27.390636  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:29.391997  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:31.890815  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:34.391000  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:36.890765  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:39.391837  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:41.890921  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:44.391354  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:46.890816  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	I0110 02:43:48.390162  200525 pod_ready.go:94] pod "coredns-5dd5756b68-5nbj4" is "Ready"
	I0110 02:43:48.390199  200525 pod_ready.go:86] duration metric: took 38.005296045s for pod "coredns-5dd5756b68-5nbj4" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:43:48.393328  200525 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-736081" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:43:48.398767  200525 pod_ready.go:94] pod "etcd-old-k8s-version-736081" is "Ready"
	I0110 02:43:48.398796  200525 pod_ready.go:86] duration metric: took 5.44675ms for pod "etcd-old-k8s-version-736081" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:43:48.402661  200525 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-736081" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:43:48.408283  200525 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-736081" is "Ready"
	I0110 02:43:48.408311  200525 pod_ready.go:86] duration metric: took 5.624304ms for pod "kube-apiserver-old-k8s-version-736081" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:43:48.411389  200525 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-736081" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:43:48.588483  200525 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-736081" is "Ready"
	I0110 02:43:48.588510  200525 pod_ready.go:86] duration metric: took 177.100695ms for pod "kube-controller-manager-old-k8s-version-736081" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:43:48.789206  200525 pod_ready.go:83] waiting for pod "kube-proxy-kngxj" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:43:49.189325  200525 pod_ready.go:94] pod "kube-proxy-kngxj" is "Ready"
	I0110 02:43:49.189404  200525 pod_ready.go:86] duration metric: took 400.164672ms for pod "kube-proxy-kngxj" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:43:49.389482  200525 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-736081" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:43:49.788368  200525 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-736081" is "Ready"
	I0110 02:43:49.788397  200525 pod_ready.go:86] duration metric: took 398.889418ms for pod "kube-scheduler-old-k8s-version-736081" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:43:49.788411  200525 pod_ready.go:40] duration metric: took 39.410656962s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:43:49.842337  200525 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I0110 02:43:49.845491  200525 out.go:203] 
	W0110 02:43:49.848432  200525 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I0110 02:43:49.851236  200525 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:43:49.854190  200525 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-736081" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 10 02:43:40 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:40.343165156Z" level=info msg="Started container" PID=1678 containerID=e4bee80e037b45836e82e9804802d7c4dbc1107809a85cbfafa2e89816de854d description=kube-system/storage-provisioner/storage-provisioner id=550373d7-bd8d-465a-8619-0e5a744289dd name=/runtime.v1.RuntimeService/StartContainer sandboxID=15e0c5eb7c78e29269034ebdf501306f06676846729d45ffb2dbd352e3ac4a18
	Jan 10 02:43:43 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:43.098992442Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d11c1ca9-8a0b-4379-9000-5a309d73b39e name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:43:43 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:43.09988164Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=117d48da-4188-44fd-a8ab-ceddb07564bc name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:43:43 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:43.100950082Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-twwn6/dashboard-metrics-scraper" id=e0f2e57c-eb42-4128-885d-93dca67c51ac name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:43:43 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:43.101105902Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:43:43 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:43.11054368Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:43:43 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:43.111247315Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:43:43 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:43.126056021Z" level=info msg="Created container 4a25e20e749b59fa968989a3556ab81d3a7f97cccf383c7c1210aae2311e4c20: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-twwn6/dashboard-metrics-scraper" id=e0f2e57c-eb42-4128-885d-93dca67c51ac name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:43:43 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:43.127343313Z" level=info msg="Starting container: 4a25e20e749b59fa968989a3556ab81d3a7f97cccf383c7c1210aae2311e4c20" id=6f84ec9f-8245-4001-83b2-f2a45771153b name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:43:43 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:43.129100844Z" level=info msg="Started container" PID=1689 containerID=4a25e20e749b59fa968989a3556ab81d3a7f97cccf383c7c1210aae2311e4c20 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-twwn6/dashboard-metrics-scraper id=6f84ec9f-8245-4001-83b2-f2a45771153b name=/runtime.v1.RuntimeService/StartContainer sandboxID=eb77e01a234819d5f7269f82cc0dde5f61fa3c587e8019122fa571f76ffa84eb
	Jan 10 02:43:43 old-k8s-version-736081 conmon[1687]: conmon 4a25e20e749b59fa9689 <ninfo>: container 1689 exited with status 1
	Jan 10 02:43:43 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:43.324946582Z" level=info msg="Removing container: ace1eb7cfc15ad987dc1e486db2ebbe38360e4b5fe2e3cc347264d68e3cc1e5f" id=7952db70-a5c5-41f2-82fc-f09c7a82f9f4 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:43:43 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:43.332634099Z" level=info msg="Error loading conmon cgroup of container ace1eb7cfc15ad987dc1e486db2ebbe38360e4b5fe2e3cc347264d68e3cc1e5f: cgroup deleted" id=7952db70-a5c5-41f2-82fc-f09c7a82f9f4 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:43:43 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:43.335593246Z" level=info msg="Removed container ace1eb7cfc15ad987dc1e486db2ebbe38360e4b5fe2e3cc347264d68e3cc1e5f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-twwn6/dashboard-metrics-scraper" id=7952db70-a5c5-41f2-82fc-f09c7a82f9f4 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:43:49 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:49.942695414Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:43:49 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:49.942740491Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:43:49 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:49.949327476Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:43:49 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:49.949358778Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:43:49 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:49.961466652Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:43:49 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:49.961502081Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:43:49 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:49.969380575Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:43:49 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:49.969419598Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:43:49 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:49.969453238Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 10 02:43:49 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:49.980573259Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:43:49 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:49.980730432Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	4a25e20e749b5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago       Exited              dashboard-metrics-scraper   2                   eb77e01a23481       dashboard-metrics-scraper-5f989dc9cf-twwn6       kubernetes-dashboard
	e4bee80e037b4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   15e0c5eb7c78e       storage-provisioner                              kube-system
	878284c8f494f       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   34 seconds ago       Running             kubernetes-dashboard        0                   69e2573409bc9       kubernetes-dashboard-8694d4445c-dx84v            kubernetes-dashboard
	091efa75ae4f1       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   48e50c5a7cb66       busybox                                          default
	2727f421ce1dc       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           55 seconds ago       Running             coredns                     1                   dba6cf90f9e40       coredns-5dd5756b68-5nbj4                         kube-system
	aca38b9767227       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           55 seconds ago       Running             kindnet-cni                 1                   cb37614168c00       kindnet-gx95x                                    kube-system
	286e7b550e400       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   15e0c5eb7c78e       storage-provisioner                              kube-system
	262f20e22f472       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           55 seconds ago       Running             kube-proxy                  1                   0b4395b21dd4c       kube-proxy-kngxj                                 kube-system
	6355391b1f72a       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   6339ee3d33708       kube-apiserver-old-k8s-version-736081            kube-system
	4d027e98ddd01       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   79c13d3432ae4       kube-scheduler-old-k8s-version-736081            kube-system
	b9d64a94931c8       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   90c1dcc813918       kube-controller-manager-old-k8s-version-736081   kube-system
	819d19fdf12b7       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   7cb3f38c3591e       etcd-old-k8s-version-736081                      kube-system
	
	
	==> coredns [2727f421ce1dca90f75e6f41a849e6e779f262f9dbb7b30e2f669a71e611aa3f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40406 - 46746 "HINFO IN 6405047081352465153.93381792023490682. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.023566967s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-736081
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-736081
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=old-k8s-version-736081
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_42_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:41:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-736081
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:43:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:43:39 +0000   Sat, 10 Jan 2026 02:41:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:43:39 +0000   Sat, 10 Jan 2026 02:41:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:43:39 +0000   Sat, 10 Jan 2026 02:41:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:43:39 +0000   Sat, 10 Jan 2026 02:42:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-736081
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                35697ab1-6362-43a3-ac4e-c38faa5c6e6d
	  Boot ID:                    41f52236-76b9-4dc8-bfe3-121fbdeb9659
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-5nbj4                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     112s
	  kube-system                 etcd-old-k8s-version-736081                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m5s
	  kube-system                 kindnet-gx95x                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-old-k8s-version-736081             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-old-k8s-version-736081    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-kngxj                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-old-k8s-version-736081             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-twwn6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-dx84v             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 111s                   kube-proxy       
	  Normal  Starting                 55s                    kube-proxy       
	  Normal  Starting                 2m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-736081 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-736081 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-736081 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m5s                   kubelet          Node old-k8s-version-736081 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m5s                   kubelet          Node old-k8s-version-736081 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m5s                   kubelet          Node old-k8s-version-736081 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m5s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           113s                   node-controller  Node old-k8s-version-736081 event: Registered Node old-k8s-version-736081 in Controller
	  Normal  NodeReady                98s                    kubelet          Node old-k8s-version-736081 status is now: NodeReady
	  Normal  Starting                 63s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node old-k8s-version-736081 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node old-k8s-version-736081 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node old-k8s-version-736081 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                    node-controller  Node old-k8s-version-736081 event: Registered Node old-k8s-version-736081 in Controller
	
	
	==> dmesg <==
	[ +39.223525] overlayfs: idmapped layers are currently not supported
	[Jan10 02:09] overlayfs: idmapped layers are currently not supported
	[Jan10 02:10] overlayfs: idmapped layers are currently not supported
	[Jan10 02:14] overlayfs: idmapped layers are currently not supported
	[ +27.765975] overlayfs: idmapped layers are currently not supported
	[Jan10 02:15] overlayfs: idmapped layers are currently not supported
	[Jan10 02:16] overlayfs: idmapped layers are currently not supported
	[Jan10 02:17] overlayfs: idmapped layers are currently not supported
	[Jan10 02:18] overlayfs: idmapped layers are currently not supported
	[ +23.212409] overlayfs: idmapped layers are currently not supported
	[  +9.866169] overlayfs: idmapped layers are currently not supported
	[Jan10 02:19] overlayfs: idmapped layers are currently not supported
	[Jan10 02:20] overlayfs: idmapped layers are currently not supported
	[ +35.960240] overlayfs: idmapped layers are currently not supported
	[Jan10 02:21] overlayfs: idmapped layers are currently not supported
	[Jan10 02:23] overlayfs: idmapped layers are currently not supported
	[Jan10 02:24] overlayfs: idmapped layers are currently not supported
	[Jan10 02:25] overlayfs: idmapped layers are currently not supported
	[Jan10 02:30] overlayfs: idmapped layers are currently not supported
	[Jan10 02:31] overlayfs: idmapped layers are currently not supported
	[Jan10 02:35] overlayfs: idmapped layers are currently not supported
	[Jan10 02:37] overlayfs: idmapped layers are currently not supported
	[Jan10 02:41] overlayfs: idmapped layers are currently not supported
	[ +37.534021] overlayfs: idmapped layers are currently not supported
	[Jan10 02:43] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [819d19fdf12b7024ad76194b4c06bf105258b02ae91cd1fbc68a9e4ad29511a7] <==
	{"level":"info","ts":"2026-01-10T02:43:02.702516Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T02:43:02.702587Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2026-01-10T02:43:02.702715Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:43:02.702742Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:43:02.70275Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:43:02.702936Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:43:02.702945Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:43:02.703465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2026-01-10T02:43:02.703514Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2026-01-10T02:43:02.703591Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T02:43:02.703614Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T02:43:03.893032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T02:43:03.893087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:43:03.893114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T02:43:03.893133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T02:43:03.89314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T02:43:03.89315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T02:43:03.893157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T02:43:03.895453Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-736081 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:43:03.895621Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:43:03.896753Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T02:43:03.897151Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:43:03.898129Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T02:43:03.909693Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:43:03.909735Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 02:44:05 up  1:26,  0 user,  load average: 1.98, 1.85, 1.83
	Linux old-k8s-version-736081 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [aca38b976722762d5b41a4691f68a0f144f48e6c928381bbaff6d0b5a573fffb] <==
	I0110 02:43:09.740689       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:43:09.741375       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 02:43:09.741512       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:43:09.741532       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:43:09.741544       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:43:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:43:09.936577       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:43:09.936654       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:43:09.936689       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:43:09.937204       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0110 02:43:39.937113       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0110 02:43:39.937113       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0110 02:43:39.937153       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0110 02:43:39.937220       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I0110 02:43:41.238205       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:43:41.238236       1 metrics.go:72] Registering metrics
	I0110 02:43:41.238307       1 controller.go:711] "Syncing nftables rules"
	I0110 02:43:49.936939       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:43:49.936995       1 main.go:301] handling current node
	I0110 02:43:59.936461       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:43:59.936522       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6355391b1f72a7a845314325e3d914fc554a55e46c034d34f7a46d52b6942f35] <==
	I0110 02:43:08.375901       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0110 02:43:08.376119       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0110 02:43:08.376855       1 aggregator.go:166] initial CRD sync complete...
	I0110 02:43:08.376878       1 autoregister_controller.go:141] Starting autoregister controller
	I0110 02:43:08.376884       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 02:43:08.376890       1 cache.go:39] Caches are synced for autoregister controller
	I0110 02:43:08.377039       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 02:43:08.377612       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0110 02:43:08.378567       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0110 02:43:08.378580       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0110 02:43:08.378674       1 shared_informer.go:318] Caches are synced for configmaps
	I0110 02:43:08.388399       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:43:08.406111       1 shared_informer.go:318] Caches are synced for node_authorizer
	E0110 02:43:08.428781       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 02:43:09.091457       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0110 02:43:10.036941       1 controller.go:624] quota admission added evaluator for: namespaces
	I0110 02:43:10.084995       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0110 02:43:10.113857       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:43:10.125187       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:43:10.150621       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0110 02:43:10.281538       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.44.74"}
	I0110 02:43:10.305850       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.152.96"}
	I0110 02:43:21.315571       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:43:21.415491       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0110 02:43:21.514990       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [b9d64a94931c8c5bff3ce0ba866efd0d474684e1a339c61f7118db8ef1a0bab6] <==
	I0110 02:43:21.430581       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I0110 02:43:21.569420       1 shared_informer.go:318] Caches are synced for garbage collector
	I0110 02:43:21.626647       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="540.182729ms"
	I0110 02:43:21.626819       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.971µs"
	I0110 02:43:21.634608       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-twwn6"
	I0110 02:43:21.634638       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-dx84v"
	I0110 02:43:21.655027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="224.600234ms"
	I0110 02:43:21.657482       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="231.459646ms"
	I0110 02:43:21.658532       1 shared_informer.go:318] Caches are synced for garbage collector
	I0110 02:43:21.658565       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0110 02:43:21.670587       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="15.503397ms"
	I0110 02:43:21.670784       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="46.514µs"
	I0110 02:43:21.680346       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="87.079µs"
	I0110 02:43:21.703863       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="46.285614ms"
	I0110 02:43:21.704148       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="54.981µs"
	I0110 02:43:21.704216       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="26.313µs"
	I0110 02:43:26.291240       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.915µs"
	I0110 02:43:27.298495       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.044µs"
	I0110 02:43:28.303636       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="45.504µs"
	I0110 02:43:30.320814       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="14.20439ms"
	I0110 02:43:30.321952       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="38.587µs"
	I0110 02:43:43.343175       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="56.467µs"
	I0110 02:43:48.070458       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.335076ms"
	I0110 02:43:48.071148       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.095µs"
	I0110 02:43:51.958367       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.494µs"
	
	
	==> kube-proxy [262f20e22f472779a424dd355205ce5ab251514b5ac20a49c9e5b921a6c5371f] <==
	I0110 02:43:09.855541       1 server_others.go:69] "Using iptables proxy"
	I0110 02:43:09.871063       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0110 02:43:09.948343       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:43:09.950174       1 server_others.go:152] "Using iptables Proxier"
	I0110 02:43:09.950215       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0110 02:43:09.950223       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0110 02:43:09.950248       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0110 02:43:09.950467       1 server.go:846] "Version info" version="v1.28.0"
	I0110 02:43:09.950487       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:43:09.951638       1 config.go:188] "Starting service config controller"
	I0110 02:43:09.951659       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0110 02:43:09.951679       1 config.go:97] "Starting endpoint slice config controller"
	I0110 02:43:09.951683       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0110 02:43:09.955065       1 config.go:315] "Starting node config controller"
	I0110 02:43:09.955085       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0110 02:43:10.051913       1 shared_informer.go:318] Caches are synced for service config
	I0110 02:43:10.052005       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0110 02:43:10.055919       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4d027e98ddd01a153e481f07a9974825140f36b8d3d38d0154f572cb3a1cd9d2] <==
	I0110 02:43:05.810418       1 serving.go:348] Generated self-signed cert in-memory
	W0110 02:43:08.328708       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 02:43:08.328812       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 02:43:08.328845       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 02:43:08.328884       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 02:43:08.399483       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0110 02:43:08.399580       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:43:08.401573       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0110 02:43:08.403979       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 02:43:08.404048       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0110 02:43:08.404388       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0110 02:43:08.505409       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 10 02:43:21 old-k8s-version-736081 kubelet[788]: I0110 02:43:21.812168     788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6r2v\" (UniqueName: \"kubernetes.io/projected/97659047-3542-4db8-b917-7c614b2478b3-kube-api-access-t6r2v\") pod \"dashboard-metrics-scraper-5f989dc9cf-twwn6\" (UID: \"97659047-3542-4db8-b917-7c614b2478b3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-twwn6"
	Jan 10 02:43:21 old-k8s-version-736081 kubelet[788]: I0110 02:43:21.812323     788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fccg2\" (UniqueName: \"kubernetes.io/projected/7aa0389e-a563-4119-879e-6c8c9d6456b2-kube-api-access-fccg2\") pod \"kubernetes-dashboard-8694d4445c-dx84v\" (UID: \"7aa0389e-a563-4119-879e-6c8c9d6456b2\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dx84v"
	Jan 10 02:43:21 old-k8s-version-736081 kubelet[788]: I0110 02:43:21.812354     788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/97659047-3542-4db8-b917-7c614b2478b3-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-twwn6\" (UID: \"97659047-3542-4db8-b917-7c614b2478b3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-twwn6"
	Jan 10 02:43:21 old-k8s-version-736081 kubelet[788]: I0110 02:43:21.812385     788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7aa0389e-a563-4119-879e-6c8c9d6456b2-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-dx84v\" (UID: \"7aa0389e-a563-4119-879e-6c8c9d6456b2\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dx84v"
	Jan 10 02:43:21 old-k8s-version-736081 kubelet[788]: W0110 02:43:21.978631     788 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd/crio-eb77e01a234819d5f7269f82cc0dde5f61fa3c587e8019122fa571f76ffa84eb WatchSource:0}: Error finding container eb77e01a234819d5f7269f82cc0dde5f61fa3c587e8019122fa571f76ffa84eb: Status 404 returned error can't find the container with id eb77e01a234819d5f7269f82cc0dde5f61fa3c587e8019122fa571f76ffa84eb
	Jan 10 02:43:26 old-k8s-version-736081 kubelet[788]: I0110 02:43:26.274544     788 scope.go:117] "RemoveContainer" containerID="3ed63582a6f88f2262f6163183786fd72fc3656c15a9ad69b817f33a615c2bd1"
	Jan 10 02:43:27 old-k8s-version-736081 kubelet[788]: I0110 02:43:27.279889     788 scope.go:117] "RemoveContainer" containerID="3ed63582a6f88f2262f6163183786fd72fc3656c15a9ad69b817f33a615c2bd1"
	Jan 10 02:43:27 old-k8s-version-736081 kubelet[788]: I0110 02:43:27.280213     788 scope.go:117] "RemoveContainer" containerID="ace1eb7cfc15ad987dc1e486db2ebbe38360e4b5fe2e3cc347264d68e3cc1e5f"
	Jan 10 02:43:27 old-k8s-version-736081 kubelet[788]: E0110 02:43:27.280482     788 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-twwn6_kubernetes-dashboard(97659047-3542-4db8-b917-7c614b2478b3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-twwn6" podUID="97659047-3542-4db8-b917-7c614b2478b3"
	Jan 10 02:43:28 old-k8s-version-736081 kubelet[788]: I0110 02:43:28.283706     788 scope.go:117] "RemoveContainer" containerID="ace1eb7cfc15ad987dc1e486db2ebbe38360e4b5fe2e3cc347264d68e3cc1e5f"
	Jan 10 02:43:28 old-k8s-version-736081 kubelet[788]: E0110 02:43:28.287601     788 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-twwn6_kubernetes-dashboard(97659047-3542-4db8-b917-7c614b2478b3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-twwn6" podUID="97659047-3542-4db8-b917-7c614b2478b3"
	Jan 10 02:43:31 old-k8s-version-736081 kubelet[788]: I0110 02:43:31.943610     788 scope.go:117] "RemoveContainer" containerID="ace1eb7cfc15ad987dc1e486db2ebbe38360e4b5fe2e3cc347264d68e3cc1e5f"
	Jan 10 02:43:31 old-k8s-version-736081 kubelet[788]: E0110 02:43:31.944437     788 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-twwn6_kubernetes-dashboard(97659047-3542-4db8-b917-7c614b2478b3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-twwn6" podUID="97659047-3542-4db8-b917-7c614b2478b3"
	Jan 10 02:43:40 old-k8s-version-736081 kubelet[788]: I0110 02:43:40.311400     788 scope.go:117] "RemoveContainer" containerID="286e7b550e400fd0d858194af6e2d84faf1650cb6c86a08499e60624a568bee2"
	Jan 10 02:43:40 old-k8s-version-736081 kubelet[788]: I0110 02:43:40.334857     788 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dx84v" podStartSLOduration=11.251861535 podCreationTimestamp="2026-01-10 02:43:21 +0000 UTC" firstStartedPulling="2026-01-10 02:43:21.995080891 +0000 UTC m=+20.120301873" lastFinishedPulling="2026-01-10 02:43:30.078019278 +0000 UTC m=+28.203240261" observedRunningTime="2026-01-10 02:43:30.305509739 +0000 UTC m=+28.430730747" watchObservedRunningTime="2026-01-10 02:43:40.334799923 +0000 UTC m=+38.460020906"
	Jan 10 02:43:43 old-k8s-version-736081 kubelet[788]: I0110 02:43:43.098384     788 scope.go:117] "RemoveContainer" containerID="ace1eb7cfc15ad987dc1e486db2ebbe38360e4b5fe2e3cc347264d68e3cc1e5f"
	Jan 10 02:43:43 old-k8s-version-736081 kubelet[788]: I0110 02:43:43.322986     788 scope.go:117] "RemoveContainer" containerID="ace1eb7cfc15ad987dc1e486db2ebbe38360e4b5fe2e3cc347264d68e3cc1e5f"
	Jan 10 02:43:43 old-k8s-version-736081 kubelet[788]: I0110 02:43:43.323272     788 scope.go:117] "RemoveContainer" containerID="4a25e20e749b59fa968989a3556ab81d3a7f97cccf383c7c1210aae2311e4c20"
	Jan 10 02:43:43 old-k8s-version-736081 kubelet[788]: E0110 02:43:43.323553     788 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-twwn6_kubernetes-dashboard(97659047-3542-4db8-b917-7c614b2478b3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-twwn6" podUID="97659047-3542-4db8-b917-7c614b2478b3"
	Jan 10 02:43:51 old-k8s-version-736081 kubelet[788]: I0110 02:43:51.943892     788 scope.go:117] "RemoveContainer" containerID="4a25e20e749b59fa968989a3556ab81d3a7f97cccf383c7c1210aae2311e4c20"
	Jan 10 02:43:51 old-k8s-version-736081 kubelet[788]: E0110 02:43:51.944224     788 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-twwn6_kubernetes-dashboard(97659047-3542-4db8-b917-7c614b2478b3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-twwn6" podUID="97659047-3542-4db8-b917-7c614b2478b3"
	Jan 10 02:44:02 old-k8s-version-736081 kubelet[788]: I0110 02:44:02.149690     788 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jan 10 02:44:02 old-k8s-version-736081 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 02:44:02 old-k8s-version-736081 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 02:44:02 old-k8s-version-736081 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [878284c8f494f2e5cb225e3fc8ecb321d081824e28f7239738ac7405f263ec31] <==
	2026/01/10 02:43:30 Starting overwatch
	2026/01/10 02:43:30 Using namespace: kubernetes-dashboard
	2026/01/10 02:43:30 Using in-cluster config to connect to apiserver
	2026/01/10 02:43:30 Using secret token for csrf signing
	2026/01/10 02:43:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 02:43:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 02:43:30 Successful initial request to the apiserver, version: v1.28.0
	2026/01/10 02:43:30 Generating JWE encryption key
	2026/01/10 02:43:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 02:43:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 02:43:30 Initializing JWE encryption key from synchronized object
	2026/01/10 02:43:30 Creating in-cluster Sidecar client
	2026/01/10 02:43:30 Serving insecurely on HTTP port: 9090
	2026/01/10 02:43:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:44:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [286e7b550e400fd0d858194af6e2d84faf1650cb6c86a08499e60624a568bee2] <==
	I0110 02:43:09.725632       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 02:43:39.737658       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e4bee80e037b45836e82e9804802d7c4dbc1107809a85cbfafa2e89816de854d] <==
	I0110 02:43:40.356801       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 02:43:40.372882       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 02:43:40.372964       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0110 02:43:57.774275       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 02:43:57.774472       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-736081_6cb9ea04-dff9-49c0-a719-c242d0f1ea54!
	I0110 02:43:57.774903       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dd39af43-26bc-44a5-a4ee-75ffe0c111d7", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-736081_6cb9ea04-dff9-49c0-a719-c242d0f1ea54 became leader
	I0110 02:43:57.875947       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-736081_6cb9ea04-dff9-49c0-a719-c242d0f1ea54!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-736081 -n old-k8s-version-736081
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-736081 -n old-k8s-version-736081: exit status 2 (366.738793ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-736081 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-736081
helpers_test.go:244: (dbg) docker inspect old-k8s-version-736081:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd",
	        "Created": "2026-01-10T02:41:32.674196479Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 200654,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:42:54.898649089Z",
	            "FinishedAt": "2026-01-10T02:42:54.092576538Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd/hostname",
	        "HostsPath": "/var/lib/docker/containers/a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd/hosts",
	        "LogPath": "/var/lib/docker/containers/a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd/a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd-json.log",
	        "Name": "/old-k8s-version-736081",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-736081:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-736081",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd",
	                "LowerDir": "/var/lib/docker/overlay2/900fe6e640aea64a7d338ba444221d388c553a58a464e856a98199bafb2b388e-init/diff:/var/lib/docker/overlay2/2b235871be32ebd3e55c0a490bf5b289824c6be94b4d2763f5bca3b814af3fd1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/900fe6e640aea64a7d338ba444221d388c553a58a464e856a98199bafb2b388e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/900fe6e640aea64a7d338ba444221d388c553a58a464e856a98199bafb2b388e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/900fe6e640aea64a7d338ba444221d388c553a58a464e856a98199bafb2b388e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-736081",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-736081/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-736081",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-736081",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-736081",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "490d1248c43e1611b3397942a91126dabcac62865444217718486531d8812027",
	            "SandboxKey": "/var/run/docker/netns/490d1248c43e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-736081": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:ad:57:2e:8b:2b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "19dbfb1518ac950f9693694cb0229451b62340819974a22c4a52e8192582b225",
	                    "EndpointID": "cf0a31f55eafc825a6ef4964ba42aa9fb9522238e272e1d4774afc7bbe00d626",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-736081",
	                        "a4844cb5bc1c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-736081 -n old-k8s-version-736081
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-736081 -n old-k8s-version-736081: exit status 2 (369.917553ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-736081 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-736081 logs -n 25: (1.255970687s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-989144 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo containerd config dump                                                                                                                                                                                                  │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo crio config                                                                                                                                                                                                             │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ delete  │ -p cilium-989144                                                                                                                                                                                                                              │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │ 10 Jan 26 02:36 UTC │
	│ start   │ -p cert-expiration-213257 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-213257    │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │ 10 Jan 26 02:37 UTC │
	│ start   │ -p cert-expiration-213257 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-213257    │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:40 UTC │
	│ delete  │ -p cert-expiration-213257                                                                                                                                                                                                                     │ cert-expiration-213257    │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:40 UTC │
	│ start   │ -p force-systemd-flag-038359 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-038359 │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │                     │
	│ delete  │ -p force-systemd-env-088457                                                                                                                                                                                                                   │ force-systemd-env-088457  │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:40 UTC │
	│ start   │ -p cert-options-295914 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-295914       │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:41 UTC │
	│ ssh     │ cert-options-295914 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-295914       │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:41 UTC │
	│ ssh     │ -p cert-options-295914 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-295914       │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:41 UTC │
	│ delete  │ -p cert-options-295914                                                                                                                                                                                                                        │ cert-options-295914       │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:41 UTC │
	│ start   │ -p old-k8s-version-736081 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:42 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-736081 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │                     │
	│ stop    │ -p old-k8s-version-736081 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-736081 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:42 UTC │
	│ start   │ -p old-k8s-version-736081 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:43 UTC │
	│ image   │ old-k8s-version-736081 image list --format=json                                                                                                                                                                                               │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ pause   │ -p old-k8s-version-736081 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:42:54
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:42:54.633575  200525 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:42:54.633708  200525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:42:54.633718  200525 out.go:374] Setting ErrFile to fd 2...
	I0110 02:42:54.633724  200525 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:42:54.633970  200525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:42:54.634351  200525 out.go:368] Setting JSON to false
	I0110 02:42:54.635150  200525 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5124,"bootTime":1768007851,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 02:42:54.635224  200525 start.go:143] virtualization:  
	I0110 02:42:54.638178  200525 out.go:179] * [old-k8s-version-736081] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:42:54.641930  200525 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:42:54.642002  200525 notify.go:221] Checking for updates...
	I0110 02:42:54.647727  200525 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:42:54.650635  200525 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:42:54.653657  200525 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 02:42:54.658817  200525 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:42:54.661750  200525 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:42:54.665539  200525 config.go:182] Loaded profile config "old-k8s-version-736081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 02:42:54.669327  200525 out.go:179] * Kubernetes 1.35.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.35.0
	I0110 02:42:54.672081  200525 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:42:54.694208  200525 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:42:54.694332  200525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:42:54.753077  200525 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:42:54.743907184 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:42:54.753193  200525 docker.go:319] overlay module found
	I0110 02:42:54.756162  200525 out.go:179] * Using the docker driver based on existing profile
	I0110 02:42:54.758999  200525 start.go:309] selected driver: docker
	I0110 02:42:54.759013  200525 start.go:928] validating driver "docker" against &{Name:old-k8s-version-736081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-736081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:42:54.759114  200525 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:42:54.759862  200525 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:42:54.816472  200525 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:42:54.806826068 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:42:54.816824  200525 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:42:54.816862  200525 cni.go:84] Creating CNI manager for ""
	I0110 02:42:54.816916  200525 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:42:54.816958  200525 start.go:353] cluster config:
	{Name:old-k8s-version-736081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-736081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:42:54.820117  200525 out.go:179] * Starting "old-k8s-version-736081" primary control-plane node in "old-k8s-version-736081" cluster
	I0110 02:42:54.822975  200525 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:42:54.825714  200525 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:42:54.828494  200525 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0110 02:42:54.828551  200525 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0110 02:42:54.828564  200525 cache.go:65] Caching tarball of preloaded images
	I0110 02:42:54.828569  200525 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:42:54.828644  200525 preload.go:251] Found /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 02:42:54.828654  200525 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0110 02:42:54.828762  200525 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/config.json ...
	I0110 02:42:54.847726  200525 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:42:54.847750  200525 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:42:54.847771  200525 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:42:54.847838  200525 start.go:360] acquireMachinesLock for old-k8s-version-736081: {Name:mk5c17d262a96ce13234dbad01b409b9bd033454 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:42:54.847908  200525 start.go:364] duration metric: took 46.784µs to acquireMachinesLock for "old-k8s-version-736081"
	I0110 02:42:54.847931  200525 start.go:96] Skipping create...Using existing machine configuration
	I0110 02:42:54.847936  200525 fix.go:54] fixHost starting: 
	I0110 02:42:54.848195  200525 cli_runner.go:164] Run: docker container inspect old-k8s-version-736081 --format={{.State.Status}}
	I0110 02:42:54.864835  200525 fix.go:112] recreateIfNeeded on old-k8s-version-736081: state=Stopped err=<nil>
	W0110 02:42:54.864864  200525 fix.go:138] unexpected machine state, will restart: <nil>
	I0110 02:42:54.868005  200525 out.go:252] * Restarting existing docker container for "old-k8s-version-736081" ...
	I0110 02:42:54.868106  200525 cli_runner.go:164] Run: docker start old-k8s-version-736081
	I0110 02:42:55.138425  200525 cli_runner.go:164] Run: docker container inspect old-k8s-version-736081 --format={{.State.Status}}
	I0110 02:42:55.162235  200525 kic.go:430] container "old-k8s-version-736081" state is running.
	I0110 02:42:55.162638  200525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-736081
	I0110 02:42:55.189680  200525 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/config.json ...
	I0110 02:42:55.190272  200525 machine.go:94] provisionDockerMachine start ...
	I0110 02:42:55.190396  200525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:42:55.218343  200525 main.go:144] libmachine: Using SSH client type: native
	I0110 02:42:55.221597  200525 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I0110 02:42:55.221617  200525 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:42:55.222233  200525 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54372->127.0.0.1:33053: read: connection reset by peer
	I0110 02:42:58.371411  200525 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-736081
	
	I0110 02:42:58.371436  200525 ubuntu.go:182] provisioning hostname "old-k8s-version-736081"
	I0110 02:42:58.371501  200525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:42:58.389421  200525 main.go:144] libmachine: Using SSH client type: native
	I0110 02:42:58.389743  200525 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I0110 02:42:58.389763  200525 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-736081 && echo "old-k8s-version-736081" | sudo tee /etc/hostname
	I0110 02:42:58.548692  200525 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-736081
	
	I0110 02:42:58.548788  200525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:42:58.566821  200525 main.go:144] libmachine: Using SSH client type: native
	I0110 02:42:58.567131  200525 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I0110 02:42:58.567147  200525 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-736081' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-736081/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-736081' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:42:58.711947  200525 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:42:58.711970  200525 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2353/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2353/.minikube}
	I0110 02:42:58.712006  200525 ubuntu.go:190] setting up certificates
	I0110 02:42:58.712016  200525 provision.go:84] configureAuth start
	I0110 02:42:58.712094  200525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-736081
	I0110 02:42:58.728834  200525 provision.go:143] copyHostCerts
	I0110 02:42:58.728902  200525 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem, removing ...
	I0110 02:42:58.728922  200525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem
	I0110 02:42:58.729021  200525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem (1082 bytes)
	I0110 02:42:58.729124  200525 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem, removing ...
	I0110 02:42:58.729135  200525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem
	I0110 02:42:58.729162  200525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem (1123 bytes)
	I0110 02:42:58.729224  200525 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem, removing ...
	I0110 02:42:58.729232  200525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem
	I0110 02:42:58.729256  200525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem (1679 bytes)
	I0110 02:42:58.729306  200525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-736081 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-736081]
	I0110 02:42:59.175449  200525 provision.go:177] copyRemoteCerts
	I0110 02:42:59.175513  200525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:42:59.175551  200525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:42:59.193338  200525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa Username:docker}
	I0110 02:42:59.295747  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0110 02:42:59.312951  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 02:42:59.330108  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:42:59.347291  200525 provision.go:87] duration metric: took 635.254974ms to configureAuth
	I0110 02:42:59.347316  200525 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:42:59.347503  200525 config.go:182] Loaded profile config "old-k8s-version-736081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 02:42:59.347606  200525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:42:59.364905  200525 main.go:144] libmachine: Using SSH client type: native
	I0110 02:42:59.365233  200525 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I0110 02:42:59.365256  200525 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:42:59.710693  200525 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:42:59.710720  200525 machine.go:97] duration metric: took 4.520433147s to provisionDockerMachine
	I0110 02:42:59.710732  200525 start.go:293] postStartSetup for "old-k8s-version-736081" (driver="docker")
	I0110 02:42:59.710761  200525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:42:59.710846  200525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:42:59.710911  200525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:42:59.734271  200525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa Username:docker}
	I0110 02:42:59.839432  200525 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:42:59.842673  200525 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:42:59.842703  200525 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:42:59.842715  200525 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/addons for local assets ...
	I0110 02:42:59.842770  200525 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/files for local assets ...
	I0110 02:42:59.842858  200525 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> 41682.pem in /etc/ssl/certs
	I0110 02:42:59.842960  200525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:42:59.850304  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:42:59.867482  200525 start.go:296] duration metric: took 156.734698ms for postStartSetup
	I0110 02:42:59.867607  200525 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:42:59.867668  200525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:42:59.884537  200525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa Username:docker}
	I0110 02:42:59.984906  200525 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:42:59.989530  200525 fix.go:56] duration metric: took 5.141587404s for fixHost
	I0110 02:42:59.989552  200525 start.go:83] releasing machines lock for "old-k8s-version-736081", held for 5.141632925s
	I0110 02:42:59.989630  200525 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-736081
	I0110 02:43:00.013275  200525 ssh_runner.go:195] Run: cat /version.json
	I0110 02:43:00.013335  200525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:43:00.014350  200525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:43:00.014428  200525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:43:00.036387  200525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa Username:docker}
	I0110 02:43:00.078395  200525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa Username:docker}
	I0110 02:43:00.224167  200525 ssh_runner.go:195] Run: systemctl --version
	I0110 02:43:00.232595  200525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:43:00.330673  200525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:43:00.436471  200525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:43:00.436555  200525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:43:00.446196  200525 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 02:43:00.446221  200525 start.go:496] detecting cgroup driver to use...
	I0110 02:43:00.446253  200525 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 02:43:00.446316  200525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:43:00.462938  200525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:43:00.478734  200525 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:43:00.478794  200525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:43:00.497393  200525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:43:00.510650  200525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:43:00.617297  200525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:43:00.726156  200525 docker.go:234] disabling docker service ...
	I0110 02:43:00.726215  200525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:43:00.741213  200525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:43:00.754491  200525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:43:00.859761  200525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:43:00.979051  200525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:43:00.991285  200525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:43:01.004081  200525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0110 02:43:01.004162  200525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:43:01.014637  200525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 02:43:01.014737  200525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:43:01.023931  200525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:43:01.032778  200525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:43:01.041833  200525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:43:01.049878  200525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:43:01.058854  200525 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:43:01.066793  200525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:43:01.075160  200525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:43:01.082847  200525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:43:01.090164  200525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:43:01.216371  200525 ssh_runner.go:195] Run: sudo systemctl restart crio
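	The run above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup_manager, conmon_cgroup, default_sysctls) and then restarts CRI-O. A minimal sketch for confirming the edits on the node, assuming the same drop-in path used here:
	
	# pause image, cgroup manager and conmon cgroup as set by the sed edits above
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# the unprivileged-port sysctl appended to default_sysctls
	sudo grep -A 2 'default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
	# CRI-O should be active again after the restart
	sudo systemctl is-active crio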
	I0110 02:43:01.410063  200525 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:43:01.410184  200525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:43:01.414145  200525 start.go:574] Will wait 60s for crictl version
	I0110 02:43:01.414249  200525 ssh_runner.go:195] Run: which crictl
	I0110 02:43:01.419081  200525 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:43:01.445627  200525 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:43:01.445772  200525 ssh_runner.go:195] Run: crio --version
	I0110 02:43:01.477017  200525 ssh_runner.go:195] Run: crio --version
	I0110 02:43:01.514666  200525 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.35.0 ...
	I0110 02:43:01.517606  200525 cli_runner.go:164] Run: docker network inspect old-k8s-version-736081 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:43:01.538061  200525 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 02:43:01.542277  200525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:43:01.552121  200525 kubeadm.go:884] updating cluster {Name:old-k8s-version-736081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-736081 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:43:01.552232  200525 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0110 02:43:01.552282  200525 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:43:01.586928  200525 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:43:01.586956  200525 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:43:01.587017  200525 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:43:01.613524  200525 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:43:01.613549  200525 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:43:01.613559  200525 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I0110 02:43:01.613760  200525 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-736081 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-736081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:43:01.613855  200525 ssh_runner.go:195] Run: crio config
	I0110 02:43:01.673961  200525 cni.go:84] Creating CNI manager for ""
	I0110 02:43:01.673989  200525 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:43:01.674009  200525 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:43:01.674048  200525 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-736081 NodeName:old-k8s-version-736081 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:43:01.674202  200525 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-736081"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:43:01.674293  200525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I0110 02:43:01.683657  200525 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:43:01.683730  200525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:43:01.694414  200525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0110 02:43:01.707601  200525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:43:01.720365  200525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
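	The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new (2160 bytes above) before being compared against the active copy later in the run. One way to list the API objects it bundles, assuming that path:
	
	# the staged file carries InitConfiguration, ClusterConfiguration,
	# KubeletConfiguration and KubeProxyConfiguration in one multi-document YAML
	sudo grep -E '^(apiVersion|kind):' /var/tmp/minikube/kubeadm.yaml.new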
	I0110 02:43:01.734290  200525 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:43:01.737903  200525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:43:01.747462  200525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:43:01.854769  200525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:43:01.870975  200525 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081 for IP: 192.168.76.2
	I0110 02:43:01.870997  200525 certs.go:195] generating shared ca certs ...
	I0110 02:43:01.871040  200525 certs.go:227] acquiring lock for ca certs: {Name:mk60bb8578732e3e8efcf11c7d77fb48585828c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:43:01.871227  200525 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key
	I0110 02:43:01.871297  200525 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key
	I0110 02:43:01.871310  200525 certs.go:257] generating profile certs ...
	I0110 02:43:01.871423  200525 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.key
	I0110 02:43:01.871518  200525 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/apiserver.key.ee08958c
	I0110 02:43:01.871591  200525 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/proxy-client.key
	I0110 02:43:01.871724  200525 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem (1338 bytes)
	W0110 02:43:01.871777  200525 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168_empty.pem, impossibly tiny 0 bytes
	I0110 02:43:01.871820  200525 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:43:01.871877  200525 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:43:01.871909  200525 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:43:01.871958  200525 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem (1679 bytes)
	I0110 02:43:01.872027  200525 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:43:01.872704  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:43:01.895944  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:43:01.913973  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:43:01.958899  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 02:43:01.991190  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0110 02:43:02.016789  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:43:02.037610  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:43:02.059313  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 02:43:02.079903  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I0110 02:43:02.112890  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:43:02.135080  200525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I0110 02:43:02.154926  200525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:43:02.168633  200525 ssh_runner.go:195] Run: openssl version
	I0110 02:43:02.174759  200525 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I0110 02:43:02.182210  200525 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I0110 02:43:02.189800  200525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I0110 02:43:02.193703  200525 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:58 /usr/share/ca-certificates/41682.pem
	I0110 02:43:02.193817  200525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I0110 02:43:02.236461  200525 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:43:02.244163  200525 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:43:02.251892  200525 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:43:02.259592  200525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:43:02.263933  200525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:43:02.264001  200525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:43:02.304915  200525 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:43:02.312819  200525 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I0110 02:43:02.320956  200525 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I0110 02:43:02.329305  200525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I0110 02:43:02.333301  200525 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:58 /usr/share/ca-certificates/4168.pem
	I0110 02:43:02.333417  200525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I0110 02:43:02.377069  200525 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
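	The test -L checks above (3ec20f2e.0, b5213941.0, 51391683.0) verify that each CA certificate is also reachable under its OpenSSL subject-hash name in /etc/ssl/certs, which is how TLS clients on the node look it up. A sketch of the same link done by hand for the minikubeCA cert from this run:
	
	# print the subject hash that the .0 symlink is named after
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# expose the cert under that hash (b5213941 in this run)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0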
	I0110 02:43:02.384861  200525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:43:02.388656  200525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 02:43:02.430744  200525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 02:43:02.471536  200525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 02:43:02.512720  200525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 02:43:02.562523  200525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 02:43:02.635309  200525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
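	The openssl x509 -checkend 86400 calls above ask whether each control-plane certificate will still be valid 24 hours (86400 seconds) from now; a failing check is what prompts minikube to regenerate the cert. The same check by hand on one of the certs listed:
	
	# exits 0 only if the certificate does not expire within the next day
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo 'valid for >= 24h'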
	I0110 02:43:02.744784  200525 kubeadm.go:401] StartCluster: {Name:old-k8s-version-736081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-736081 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:43:02.744924  200525 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:43:02.745018  200525 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:43:02.820520  200525 cri.go:96] found id: "6355391b1f72a7a845314325e3d914fc554a55e46c034d34f7a46d52b6942f35"
	I0110 02:43:02.820595  200525 cri.go:96] found id: "4d027e98ddd01a153e481f07a9974825140f36b8d3d38d0154f572cb3a1cd9d2"
	I0110 02:43:02.820614  200525 cri.go:96] found id: "b9d64a94931c8c5bff3ce0ba866efd0d474684e1a339c61f7118db8ef1a0bab6"
	I0110 02:43:02.820630  200525 cri.go:96] found id: "819d19fdf12b7024ad76194b4c06bf105258b02ae91cd1fbc68a9e4ad29511a7"
	I0110 02:43:02.820662  200525 cri.go:96] found id: ""
	I0110 02:43:02.820747  200525 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 02:43:02.833517  200525 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:43:02Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:43:02.833648  200525 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:43:02.849987  200525 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 02:43:02.850059  200525 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 02:43:02.850150  200525 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 02:43:02.862105  200525 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 02:43:02.862612  200525 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-736081" does not appear in /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:43:02.862770  200525 kubeconfig.go:62] /home/jenkins/minikube-integration/22414-2353/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-736081" cluster setting kubeconfig missing "old-k8s-version-736081" context setting]
	I0110 02:43:02.863138  200525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:43:02.864688  200525 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 02:43:02.875539  200525 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0110 02:43:02.875615  200525 kubeadm.go:602] duration metric: took 25.536815ms to restartPrimaryControlPlane
	I0110 02:43:02.875640  200525 kubeadm.go:403] duration metric: took 130.864119ms to StartCluster
	I0110 02:43:02.875685  200525 settings.go:142] acquiring lock: {Name:mkf19ab3097ad600bdb33c5b47b20062ddaabac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:43:02.875778  200525 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:43:02.876509  200525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:43:02.876983  200525 config.go:182] Loaded profile config "old-k8s-version-736081": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 02:43:02.876772  200525 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:43:02.877113  200525 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:43:02.877214  200525 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-736081"
	I0110 02:43:02.877242  200525 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-736081"
	W0110 02:43:02.877273  200525 addons.go:248] addon storage-provisioner should already be in state true
	I0110 02:43:02.877311  200525 host.go:66] Checking if "old-k8s-version-736081" exists ...
	I0110 02:43:02.878156  200525 cli_runner.go:164] Run: docker container inspect old-k8s-version-736081 --format={{.State.Status}}
	I0110 02:43:02.878725  200525 addons.go:70] Setting dashboard=true in profile "old-k8s-version-736081"
	I0110 02:43:02.878758  200525 addons.go:239] Setting addon dashboard=true in "old-k8s-version-736081"
	W0110 02:43:02.878766  200525 addons.go:248] addon dashboard should already be in state true
	I0110 02:43:02.878801  200525 host.go:66] Checking if "old-k8s-version-736081" exists ...
	I0110 02:43:02.878846  200525 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-736081"
	I0110 02:43:02.878886  200525 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-736081"
	I0110 02:43:02.879215  200525 cli_runner.go:164] Run: docker container inspect old-k8s-version-736081 --format={{.State.Status}}
	I0110 02:43:02.879271  200525 cli_runner.go:164] Run: docker container inspect old-k8s-version-736081 --format={{.State.Status}}
	I0110 02:43:02.887863  200525 out.go:179] * Verifying Kubernetes components...
	I0110 02:43:02.892050  200525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:43:02.936117  200525 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:43:02.940072  200525 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-736081"
	W0110 02:43:02.940100  200525 addons.go:248] addon default-storageclass should already be in state true
	I0110 02:43:02.940128  200525 host.go:66] Checking if "old-k8s-version-736081" exists ...
	I0110 02:43:02.940627  200525 cli_runner.go:164] Run: docker container inspect old-k8s-version-736081 --format={{.State.Status}}
	I0110 02:43:02.944453  200525 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:43:02.944475  200525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:43:02.944532  200525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:43:02.961272  200525 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 02:43:02.964217  200525 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 02:43:02.967874  200525 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 02:43:02.967897  200525 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 02:43:02.967957  200525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:43:02.986518  200525 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:43:02.986538  200525 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:43:02.986868  200525 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-736081
	I0110 02:43:03.023715  200525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa Username:docker}
	I0110 02:43:03.058446  200525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa Username:docker}
	I0110 02:43:03.068181  200525 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/old-k8s-version-736081/id_rsa Username:docker}
	I0110 02:43:03.241869  200525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:43:03.273744  200525 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-736081" to be "Ready" ...
	I0110 02:43:03.319693  200525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:43:03.402460  200525 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 02:43:03.402528  200525 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 02:43:03.410620  200525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:43:03.473718  200525 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 02:43:03.473789  200525 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 02:43:03.550710  200525 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 02:43:03.550742  200525 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 02:43:03.619069  200525 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 02:43:03.619093  200525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 02:43:03.672917  200525 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 02:43:03.672944  200525 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 02:43:03.697169  200525 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 02:43:03.697208  200525 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 02:43:03.715731  200525 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 02:43:03.715829  200525 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 02:43:03.745811  200525 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 02:43:03.745831  200525 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 02:43:03.769201  200525 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:43:03.769271  200525 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 02:43:03.791374  200525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:43:08.356532  200525 node_ready.go:49] node "old-k8s-version-736081" is "Ready"
	I0110 02:43:08.356559  200525 node_ready.go:38] duration metric: took 5.082773305s for node "old-k8s-version-736081" to be "Ready" ...
	I0110 02:43:08.356573  200525 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:43:08.356631  200525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:43:09.669760  200525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.350034323s)
	I0110 02:43:09.669827  200525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.259149257s)
	I0110 02:43:10.313612  200525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.522153535s)
	I0110 02:43:10.313804  200525 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.957162651s)
	I0110 02:43:10.313827  200525 api_server.go:72] duration metric: took 7.436743343s to wait for apiserver process to appear ...
	I0110 02:43:10.313834  200525 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:43:10.313850  200525 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:43:10.316785  200525 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-736081 addons enable metrics-server
	
	I0110 02:43:10.319810  200525 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I0110 02:43:10.321969  200525 addons.go:530] duration metric: took 7.444856176s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
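	With the addons applied, the dashboard objects live in the kubernetes-dashboard namespace. A sketch for confirming the rollout from the host, assuming the deployment keeps the kubernetes-dashboard name its pod prefix suggests:
	
	kubectl --context old-k8s-version-736081 -n kubernetes-dashboard get deployments,pods
	kubectl --context old-k8s-version-736081 -n kubernetes-dashboard rollout status deployment/kubernetes-dashboard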
	I0110 02:43:10.332459  200525 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 02:43:10.334945  200525 api_server.go:141] control plane version: v1.28.0
	I0110 02:43:10.334969  200525 api_server.go:131] duration metric: took 21.13003ms to wait for apiserver health ...
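	The healthz probe above is an HTTPS GET against the API server on 192.168.76.2:8443. Roughly the same check by hand, skipping certificate verification for brevity (minikube's own probe goes through the cluster's TLS config):
	
	curl -k https://192.168.76.2:8443/healthz
	# expected output: ok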
	I0110 02:43:10.334979  200525 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:43:10.341690  200525 system_pods.go:59] 8 kube-system pods found
	I0110 02:43:10.341736  200525 system_pods.go:61] "coredns-5dd5756b68-5nbj4" [eef91741-4e4f-4500-8bfb-fc6218330aa6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:43:10.341746  200525 system_pods.go:61] "etcd-old-k8s-version-736081" [98f5a6d4-ccda-4c98-8e24-94a1c2e4ddd0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:43:10.341757  200525 system_pods.go:61] "kindnet-gx95x" [60c01285-ba95-4583-a0c0-b55ef4afab1f] Running
	I0110 02:43:10.341765  200525 system_pods.go:61] "kube-apiserver-old-k8s-version-736081" [ff94f7cb-219a-422e-af5a-93d1e5995c4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:43:10.341779  200525 system_pods.go:61] "kube-controller-manager-old-k8s-version-736081" [21c33917-1238-4581-bb07-ec7199814749] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:43:10.341785  200525 system_pods.go:61] "kube-proxy-kngxj" [121d4fcc-5f3d-4c31-9cec-a560afde1be1] Running
	I0110 02:43:10.341798  200525 system_pods.go:61] "kube-scheduler-old-k8s-version-736081" [f644c304-2337-4ddf-a914-02772d8ad2ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:43:10.341805  200525 system_pods.go:61] "storage-provisioner" [cc657c06-3c7c-4be5-849e-1b627d06ed63] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:43:10.341815  200525 system_pods.go:74] duration metric: took 6.831014ms to wait for pod list to return data ...
	I0110 02:43:10.341824  200525 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:43:10.347398  200525 default_sa.go:45] found service account: "default"
	I0110 02:43:10.347427  200525 default_sa.go:55] duration metric: took 5.595644ms for default service account to be created ...
	I0110 02:43:10.347441  200525 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:43:10.354024  200525 system_pods.go:86] 8 kube-system pods found
	I0110 02:43:10.354064  200525 system_pods.go:89] "coredns-5dd5756b68-5nbj4" [eef91741-4e4f-4500-8bfb-fc6218330aa6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:43:10.354083  200525 system_pods.go:89] "etcd-old-k8s-version-736081" [98f5a6d4-ccda-4c98-8e24-94a1c2e4ddd0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:43:10.354089  200525 system_pods.go:89] "kindnet-gx95x" [60c01285-ba95-4583-a0c0-b55ef4afab1f] Running
	I0110 02:43:10.354098  200525 system_pods.go:89] "kube-apiserver-old-k8s-version-736081" [ff94f7cb-219a-422e-af5a-93d1e5995c4f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:43:10.354105  200525 system_pods.go:89] "kube-controller-manager-old-k8s-version-736081" [21c33917-1238-4581-bb07-ec7199814749] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:43:10.354111  200525 system_pods.go:89] "kube-proxy-kngxj" [121d4fcc-5f3d-4c31-9cec-a560afde1be1] Running
	I0110 02:43:10.354117  200525 system_pods.go:89] "kube-scheduler-old-k8s-version-736081" [f644c304-2337-4ddf-a914-02772d8ad2ed] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:43:10.354122  200525 system_pods.go:89] "storage-provisioner" [cc657c06-3c7c-4be5-849e-1b627d06ed63] Running
	I0110 02:43:10.354130  200525 system_pods.go:126] duration metric: took 6.682628ms to wait for k8s-apps to be running ...
	I0110 02:43:10.354141  200525 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:43:10.354209  200525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:43:10.367565  200525 system_svc.go:56] duration metric: took 13.411237ms WaitForService to wait for kubelet
	I0110 02:43:10.367595  200525 kubeadm.go:587] duration metric: took 7.490509165s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:43:10.367615  200525 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:43:10.372471  200525 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 02:43:10.372552  200525 node_conditions.go:123] node cpu capacity is 2
	I0110 02:43:10.372581  200525 node_conditions.go:105] duration metric: took 4.960075ms to run NodePressure ...
	I0110 02:43:10.372607  200525 start.go:242] waiting for startup goroutines ...
	I0110 02:43:10.372641  200525 start.go:247] waiting for cluster config update ...
	I0110 02:43:10.372671  200525 start.go:256] writing updated cluster config ...
	I0110 02:43:10.372995  200525 ssh_runner.go:195] Run: rm -f paused
	I0110 02:43:10.377677  200525 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:43:10.384865  200525 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-5nbj4" in "kube-system" namespace to be "Ready" or be gone ...
	W0110 02:43:12.390575  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:14.391074  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:16.391175  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:18.391554  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:20.890146  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:22.890302  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:24.891181  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:27.390636  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:29.391997  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:31.890815  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:34.391000  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:36.890765  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:39.391837  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:41.890921  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:44.391354  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	W0110 02:43:46.890816  200525 pod_ready.go:104] pod "coredns-5dd5756b68-5nbj4" is not "Ready", error: <nil>
	I0110 02:43:48.390162  200525 pod_ready.go:94] pod "coredns-5dd5756b68-5nbj4" is "Ready"
	I0110 02:43:48.390199  200525 pod_ready.go:86] duration metric: took 38.005296045s for pod "coredns-5dd5756b68-5nbj4" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:43:48.393328  200525 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-736081" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:43:48.398767  200525 pod_ready.go:94] pod "etcd-old-k8s-version-736081" is "Ready"
	I0110 02:43:48.398796  200525 pod_ready.go:86] duration metric: took 5.44675ms for pod "etcd-old-k8s-version-736081" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:43:48.402661  200525 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-736081" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:43:48.408283  200525 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-736081" is "Ready"
	I0110 02:43:48.408311  200525 pod_ready.go:86] duration metric: took 5.624304ms for pod "kube-apiserver-old-k8s-version-736081" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:43:48.411389  200525 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-736081" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:43:48.588483  200525 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-736081" is "Ready"
	I0110 02:43:48.588510  200525 pod_ready.go:86] duration metric: took 177.100695ms for pod "kube-controller-manager-old-k8s-version-736081" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:43:48.789206  200525 pod_ready.go:83] waiting for pod "kube-proxy-kngxj" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:43:49.189325  200525 pod_ready.go:94] pod "kube-proxy-kngxj" is "Ready"
	I0110 02:43:49.189404  200525 pod_ready.go:86] duration metric: took 400.164672ms for pod "kube-proxy-kngxj" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:43:49.389482  200525 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-736081" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:43:49.788368  200525 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-736081" is "Ready"
	I0110 02:43:49.788397  200525 pod_ready.go:86] duration metric: took 398.889418ms for pod "kube-scheduler-old-k8s-version-736081" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:43:49.788411  200525 pod_ready.go:40] duration metric: took 39.410656962s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
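	The extra wait above polls each control-plane pod by label (up to 4m0s) until it is Ready or gone. An equivalent one-off check with kubectl, using the CoreDNS label from the list:
	
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s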
	I0110 02:43:49.842337  200525 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I0110 02:43:49.845491  200525 out.go:203] 
	W0110 02:43:49.848432  200525 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I0110 02:43:49.851236  200525 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:43:49.854190  200525 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-736081" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 10 02:43:40 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:40.343165156Z" level=info msg="Started container" PID=1678 containerID=e4bee80e037b45836e82e9804802d7c4dbc1107809a85cbfafa2e89816de854d description=kube-system/storage-provisioner/storage-provisioner id=550373d7-bd8d-465a-8619-0e5a744289dd name=/runtime.v1.RuntimeService/StartContainer sandboxID=15e0c5eb7c78e29269034ebdf501306f06676846729d45ffb2dbd352e3ac4a18
	Jan 10 02:43:43 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:43.098992442Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d11c1ca9-8a0b-4379-9000-5a309d73b39e name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:43:43 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:43.09988164Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=117d48da-4188-44fd-a8ab-ceddb07564bc name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:43:43 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:43.100950082Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-twwn6/dashboard-metrics-scraper" id=e0f2e57c-eb42-4128-885d-93dca67c51ac name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:43:43 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:43.101105902Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:43:43 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:43.11054368Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:43:43 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:43.111247315Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:43:43 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:43.126056021Z" level=info msg="Created container 4a25e20e749b59fa968989a3556ab81d3a7f97cccf383c7c1210aae2311e4c20: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-twwn6/dashboard-metrics-scraper" id=e0f2e57c-eb42-4128-885d-93dca67c51ac name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:43:43 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:43.127343313Z" level=info msg="Starting container: 4a25e20e749b59fa968989a3556ab81d3a7f97cccf383c7c1210aae2311e4c20" id=6f84ec9f-8245-4001-83b2-f2a45771153b name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:43:43 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:43.129100844Z" level=info msg="Started container" PID=1689 containerID=4a25e20e749b59fa968989a3556ab81d3a7f97cccf383c7c1210aae2311e4c20 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-twwn6/dashboard-metrics-scraper id=6f84ec9f-8245-4001-83b2-f2a45771153b name=/runtime.v1.RuntimeService/StartContainer sandboxID=eb77e01a234819d5f7269f82cc0dde5f61fa3c587e8019122fa571f76ffa84eb
	Jan 10 02:43:43 old-k8s-version-736081 conmon[1687]: conmon 4a25e20e749b59fa9689 <ninfo>: container 1689 exited with status 1
	Jan 10 02:43:43 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:43.324946582Z" level=info msg="Removing container: ace1eb7cfc15ad987dc1e486db2ebbe38360e4b5fe2e3cc347264d68e3cc1e5f" id=7952db70-a5c5-41f2-82fc-f09c7a82f9f4 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:43:43 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:43.332634099Z" level=info msg="Error loading conmon cgroup of container ace1eb7cfc15ad987dc1e486db2ebbe38360e4b5fe2e3cc347264d68e3cc1e5f: cgroup deleted" id=7952db70-a5c5-41f2-82fc-f09c7a82f9f4 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:43:43 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:43.335593246Z" level=info msg="Removed container ace1eb7cfc15ad987dc1e486db2ebbe38360e4b5fe2e3cc347264d68e3cc1e5f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-twwn6/dashboard-metrics-scraper" id=7952db70-a5c5-41f2-82fc-f09c7a82f9f4 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:43:49 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:49.942695414Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:43:49 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:49.942740491Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:43:49 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:49.949327476Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:43:49 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:49.949358778Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:43:49 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:49.961466652Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:43:49 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:49.961502081Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:43:49 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:49.969380575Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:43:49 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:49.969419598Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:43:49 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:49.969453238Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 10 02:43:49 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:49.980573259Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:43:49 old-k8s-version-736081 crio[661]: time="2026-01-10T02:43:49.980730432Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	4a25e20e749b5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago       Exited              dashboard-metrics-scraper   2                   eb77e01a23481       dashboard-metrics-scraper-5f989dc9cf-twwn6       kubernetes-dashboard
	e4bee80e037b4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           26 seconds ago       Running             storage-provisioner         2                   15e0c5eb7c78e       storage-provisioner                              kube-system
	878284c8f494f       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   36 seconds ago       Running             kubernetes-dashboard        0                   69e2573409bc9       kubernetes-dashboard-8694d4445c-dx84v            kubernetes-dashboard
	091efa75ae4f1       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   48e50c5a7cb66       busybox                                          default
	2727f421ce1dc       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           57 seconds ago       Running             coredns                     1                   dba6cf90f9e40       coredns-5dd5756b68-5nbj4                         kube-system
	aca38b9767227       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           57 seconds ago       Running             kindnet-cni                 1                   cb37614168c00       kindnet-gx95x                                    kube-system
	286e7b550e400       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   15e0c5eb7c78e       storage-provisioner                              kube-system
	262f20e22f472       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           57 seconds ago       Running             kube-proxy                  1                   0b4395b21dd4c       kube-proxy-kngxj                                 kube-system
	6355391b1f72a       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   6339ee3d33708       kube-apiserver-old-k8s-version-736081            kube-system
	4d027e98ddd01       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   79c13d3432ae4       kube-scheduler-old-k8s-version-736081            kube-system
	b9d64a94931c8       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   90c1dcc813918       kube-controller-manager-old-k8s-version-736081   kube-system
	819d19fdf12b7       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   7cb3f38c3591e       etcd-old-k8s-version-736081                      kube-system
	
	
	==> coredns [2727f421ce1dca90f75e6f41a849e6e779f262f9dbb7b30e2f669a71e611aa3f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40406 - 46746 "HINFO IN 6405047081352465153.93381792023490682. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.023566967s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-736081
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-736081
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=old-k8s-version-736081
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_42_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:41:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-736081
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:43:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:43:39 +0000   Sat, 10 Jan 2026 02:41:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:43:39 +0000   Sat, 10 Jan 2026 02:41:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:43:39 +0000   Sat, 10 Jan 2026 02:41:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:43:39 +0000   Sat, 10 Jan 2026 02:42:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-736081
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                35697ab1-6362-43a3-ac4e-c38faa5c6e6d
	  Boot ID:                    41f52236-76b9-4dc8-bfe3-121fbdeb9659
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-5dd5756b68-5nbj4                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     114s
	  kube-system                 etcd-old-k8s-version-736081                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m7s
	  kube-system                 kindnet-gx95x                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-old-k8s-version-736081             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-controller-manager-old-k8s-version-736081    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-kngxj                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-old-k8s-version-736081             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-twwn6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-dx84v             0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 113s                   kube-proxy       
	  Normal  Starting                 57s                    kube-proxy       
	  Normal  Starting                 2m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m15s (x8 over 2m15s)  kubelet          Node old-k8s-version-736081 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m15s (x8 over 2m15s)  kubelet          Node old-k8s-version-736081 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m15s (x8 over 2m15s)  kubelet          Node old-k8s-version-736081 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m7s                   kubelet          Node old-k8s-version-736081 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m7s                   kubelet          Node old-k8s-version-736081 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m7s                   kubelet          Node old-k8s-version-736081 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m7s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           115s                   node-controller  Node old-k8s-version-736081 event: Registered Node old-k8s-version-736081 in Controller
	  Normal  NodeReady                100s                   kubelet          Node old-k8s-version-736081 status is now: NodeReady
	  Normal  Starting                 65s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  65s (x8 over 65s)      kubelet          Node old-k8s-version-736081 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s (x8 over 65s)      kubelet          Node old-k8s-version-736081 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x8 over 65s)      kubelet          Node old-k8s-version-736081 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                    node-controller  Node old-k8s-version-736081 event: Registered Node old-k8s-version-736081 in Controller
	
	
	==> dmesg <==
	[ +39.223525] overlayfs: idmapped layers are currently not supported
	[Jan10 02:09] overlayfs: idmapped layers are currently not supported
	[Jan10 02:10] overlayfs: idmapped layers are currently not supported
	[Jan10 02:14] overlayfs: idmapped layers are currently not supported
	[ +27.765975] overlayfs: idmapped layers are currently not supported
	[Jan10 02:15] overlayfs: idmapped layers are currently not supported
	[Jan10 02:16] overlayfs: idmapped layers are currently not supported
	[Jan10 02:17] overlayfs: idmapped layers are currently not supported
	[Jan10 02:18] overlayfs: idmapped layers are currently not supported
	[ +23.212409] overlayfs: idmapped layers are currently not supported
	[  +9.866169] overlayfs: idmapped layers are currently not supported
	[Jan10 02:19] overlayfs: idmapped layers are currently not supported
	[Jan10 02:20] overlayfs: idmapped layers are currently not supported
	[ +35.960240] overlayfs: idmapped layers are currently not supported
	[Jan10 02:21] overlayfs: idmapped layers are currently not supported
	[Jan10 02:23] overlayfs: idmapped layers are currently not supported
	[Jan10 02:24] overlayfs: idmapped layers are currently not supported
	[Jan10 02:25] overlayfs: idmapped layers are currently not supported
	[Jan10 02:30] overlayfs: idmapped layers are currently not supported
	[Jan10 02:31] overlayfs: idmapped layers are currently not supported
	[Jan10 02:35] overlayfs: idmapped layers are currently not supported
	[Jan10 02:37] overlayfs: idmapped layers are currently not supported
	[Jan10 02:41] overlayfs: idmapped layers are currently not supported
	[ +37.534021] overlayfs: idmapped layers are currently not supported
	[Jan10 02:43] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [819d19fdf12b7024ad76194b4c06bf105258b02ae91cd1fbc68a9e4ad29511a7] <==
	{"level":"info","ts":"2026-01-10T02:43:02.702516Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T02:43:02.702587Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2026-01-10T02:43:02.702715Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:43:02.702742Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:43:02.70275Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:43:02.702936Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:43:02.702945Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:43:02.703465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2026-01-10T02:43:02.703514Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2026-01-10T02:43:02.703591Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T02:43:02.703614Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T02:43:03.893032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T02:43:03.893087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:43:03.893114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T02:43:03.893133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T02:43:03.89314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T02:43:03.89315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T02:43:03.893157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T02:43:03.895453Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-736081 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:43:03.895621Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:43:03.896753Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T02:43:03.897151Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:43:03.898129Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T02:43:03.909693Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:43:03.909735Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 02:44:07 up  1:26,  0 user,  load average: 1.90, 1.83, 1.82
	Linux old-k8s-version-736081 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [aca38b976722762d5b41a4691f68a0f144f48e6c928381bbaff6d0b5a573fffb] <==
	I0110 02:43:09.740689       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:43:09.741375       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 02:43:09.741512       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:43:09.741532       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:43:09.741544       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:43:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:43:09.936577       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:43:09.936654       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:43:09.936689       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:43:09.937204       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0110 02:43:39.937113       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0110 02:43:39.937113       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0110 02:43:39.937153       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0110 02:43:39.937220       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I0110 02:43:41.238205       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:43:41.238236       1 metrics.go:72] Registering metrics
	I0110 02:43:41.238307       1 controller.go:711] "Syncing nftables rules"
	I0110 02:43:49.936939       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:43:49.936995       1 main.go:301] handling current node
	I0110 02:43:59.936461       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:43:59.936522       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6355391b1f72a7a845314325e3d914fc554a55e46c034d34f7a46d52b6942f35] <==
	I0110 02:43:08.375901       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0110 02:43:08.376119       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0110 02:43:08.376855       1 aggregator.go:166] initial CRD sync complete...
	I0110 02:43:08.376878       1 autoregister_controller.go:141] Starting autoregister controller
	I0110 02:43:08.376884       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 02:43:08.376890       1 cache.go:39] Caches are synced for autoregister controller
	I0110 02:43:08.377039       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 02:43:08.377612       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0110 02:43:08.378567       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0110 02:43:08.378580       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0110 02:43:08.378674       1 shared_informer.go:318] Caches are synced for configmaps
	I0110 02:43:08.388399       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:43:08.406111       1 shared_informer.go:318] Caches are synced for node_authorizer
	E0110 02:43:08.428781       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 02:43:09.091457       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0110 02:43:10.036941       1 controller.go:624] quota admission added evaluator for: namespaces
	I0110 02:43:10.084995       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0110 02:43:10.113857       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:43:10.125187       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:43:10.150621       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0110 02:43:10.281538       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.44.74"}
	I0110 02:43:10.305850       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.152.96"}
	I0110 02:43:21.315571       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:43:21.415491       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0110 02:43:21.514990       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [b9d64a94931c8c5bff3ce0ba866efd0d474684e1a339c61f7118db8ef1a0bab6] <==
	I0110 02:43:21.430581       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I0110 02:43:21.569420       1 shared_informer.go:318] Caches are synced for garbage collector
	I0110 02:43:21.626647       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="540.182729ms"
	I0110 02:43:21.626819       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.971µs"
	I0110 02:43:21.634608       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-twwn6"
	I0110 02:43:21.634638       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-dx84v"
	I0110 02:43:21.655027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="224.600234ms"
	I0110 02:43:21.657482       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="231.459646ms"
	I0110 02:43:21.658532       1 shared_informer.go:318] Caches are synced for garbage collector
	I0110 02:43:21.658565       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0110 02:43:21.670587       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="15.503397ms"
	I0110 02:43:21.670784       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="46.514µs"
	I0110 02:43:21.680346       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="87.079µs"
	I0110 02:43:21.703863       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="46.285614ms"
	I0110 02:43:21.704148       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="54.981µs"
	I0110 02:43:21.704216       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="26.313µs"
	I0110 02:43:26.291240       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.915µs"
	I0110 02:43:27.298495       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.044µs"
	I0110 02:43:28.303636       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="45.504µs"
	I0110 02:43:30.320814       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="14.20439ms"
	I0110 02:43:30.321952       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="38.587µs"
	I0110 02:43:43.343175       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="56.467µs"
	I0110 02:43:48.070458       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.335076ms"
	I0110 02:43:48.071148       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.095µs"
	I0110 02:43:51.958367       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="51.494µs"
	
	
	==> kube-proxy [262f20e22f472779a424dd355205ce5ab251514b5ac20a49c9e5b921a6c5371f] <==
	I0110 02:43:09.855541       1 server_others.go:69] "Using iptables proxy"
	I0110 02:43:09.871063       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0110 02:43:09.948343       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:43:09.950174       1 server_others.go:152] "Using iptables Proxier"
	I0110 02:43:09.950215       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0110 02:43:09.950223       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0110 02:43:09.950248       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0110 02:43:09.950467       1 server.go:846] "Version info" version="v1.28.0"
	I0110 02:43:09.950487       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:43:09.951638       1 config.go:188] "Starting service config controller"
	I0110 02:43:09.951659       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0110 02:43:09.951679       1 config.go:97] "Starting endpoint slice config controller"
	I0110 02:43:09.951683       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0110 02:43:09.955065       1 config.go:315] "Starting node config controller"
	I0110 02:43:09.955085       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0110 02:43:10.051913       1 shared_informer.go:318] Caches are synced for service config
	I0110 02:43:10.052005       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0110 02:43:10.055919       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4d027e98ddd01a153e481f07a9974825140f36b8d3d38d0154f572cb3a1cd9d2] <==
	I0110 02:43:05.810418       1 serving.go:348] Generated self-signed cert in-memory
	W0110 02:43:08.328708       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 02:43:08.328812       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 02:43:08.328845       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 02:43:08.328884       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 02:43:08.399483       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0110 02:43:08.399580       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:43:08.401573       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0110 02:43:08.403979       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 02:43:08.404048       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0110 02:43:08.404388       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0110 02:43:08.505409       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 10 02:43:21 old-k8s-version-736081 kubelet[788]: I0110 02:43:21.812168     788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6r2v\" (UniqueName: \"kubernetes.io/projected/97659047-3542-4db8-b917-7c614b2478b3-kube-api-access-t6r2v\") pod \"dashboard-metrics-scraper-5f989dc9cf-twwn6\" (UID: \"97659047-3542-4db8-b917-7c614b2478b3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-twwn6"
	Jan 10 02:43:21 old-k8s-version-736081 kubelet[788]: I0110 02:43:21.812323     788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fccg2\" (UniqueName: \"kubernetes.io/projected/7aa0389e-a563-4119-879e-6c8c9d6456b2-kube-api-access-fccg2\") pod \"kubernetes-dashboard-8694d4445c-dx84v\" (UID: \"7aa0389e-a563-4119-879e-6c8c9d6456b2\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dx84v"
	Jan 10 02:43:21 old-k8s-version-736081 kubelet[788]: I0110 02:43:21.812354     788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/97659047-3542-4db8-b917-7c614b2478b3-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-twwn6\" (UID: \"97659047-3542-4db8-b917-7c614b2478b3\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-twwn6"
	Jan 10 02:43:21 old-k8s-version-736081 kubelet[788]: I0110 02:43:21.812385     788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7aa0389e-a563-4119-879e-6c8c9d6456b2-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-dx84v\" (UID: \"7aa0389e-a563-4119-879e-6c8c9d6456b2\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dx84v"
	Jan 10 02:43:21 old-k8s-version-736081 kubelet[788]: W0110 02:43:21.978631     788 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a4844cb5bc1cd94f3d52c4f6668f9598fecccc0a3b0b284b1e46bbe7c2cb42dd/crio-eb77e01a234819d5f7269f82cc0dde5f61fa3c587e8019122fa571f76ffa84eb WatchSource:0}: Error finding container eb77e01a234819d5f7269f82cc0dde5f61fa3c587e8019122fa571f76ffa84eb: Status 404 returned error can't find the container with id eb77e01a234819d5f7269f82cc0dde5f61fa3c587e8019122fa571f76ffa84eb
	Jan 10 02:43:26 old-k8s-version-736081 kubelet[788]: I0110 02:43:26.274544     788 scope.go:117] "RemoveContainer" containerID="3ed63582a6f88f2262f6163183786fd72fc3656c15a9ad69b817f33a615c2bd1"
	Jan 10 02:43:27 old-k8s-version-736081 kubelet[788]: I0110 02:43:27.279889     788 scope.go:117] "RemoveContainer" containerID="3ed63582a6f88f2262f6163183786fd72fc3656c15a9ad69b817f33a615c2bd1"
	Jan 10 02:43:27 old-k8s-version-736081 kubelet[788]: I0110 02:43:27.280213     788 scope.go:117] "RemoveContainer" containerID="ace1eb7cfc15ad987dc1e486db2ebbe38360e4b5fe2e3cc347264d68e3cc1e5f"
	Jan 10 02:43:27 old-k8s-version-736081 kubelet[788]: E0110 02:43:27.280482     788 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-twwn6_kubernetes-dashboard(97659047-3542-4db8-b917-7c614b2478b3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-twwn6" podUID="97659047-3542-4db8-b917-7c614b2478b3"
	Jan 10 02:43:28 old-k8s-version-736081 kubelet[788]: I0110 02:43:28.283706     788 scope.go:117] "RemoveContainer" containerID="ace1eb7cfc15ad987dc1e486db2ebbe38360e4b5fe2e3cc347264d68e3cc1e5f"
	Jan 10 02:43:28 old-k8s-version-736081 kubelet[788]: E0110 02:43:28.287601     788 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-twwn6_kubernetes-dashboard(97659047-3542-4db8-b917-7c614b2478b3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-twwn6" podUID="97659047-3542-4db8-b917-7c614b2478b3"
	Jan 10 02:43:31 old-k8s-version-736081 kubelet[788]: I0110 02:43:31.943610     788 scope.go:117] "RemoveContainer" containerID="ace1eb7cfc15ad987dc1e486db2ebbe38360e4b5fe2e3cc347264d68e3cc1e5f"
	Jan 10 02:43:31 old-k8s-version-736081 kubelet[788]: E0110 02:43:31.944437     788 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-twwn6_kubernetes-dashboard(97659047-3542-4db8-b917-7c614b2478b3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-twwn6" podUID="97659047-3542-4db8-b917-7c614b2478b3"
	Jan 10 02:43:40 old-k8s-version-736081 kubelet[788]: I0110 02:43:40.311400     788 scope.go:117] "RemoveContainer" containerID="286e7b550e400fd0d858194af6e2d84faf1650cb6c86a08499e60624a568bee2"
	Jan 10 02:43:40 old-k8s-version-736081 kubelet[788]: I0110 02:43:40.334857     788 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dx84v" podStartSLOduration=11.251861535 podCreationTimestamp="2026-01-10 02:43:21 +0000 UTC" firstStartedPulling="2026-01-10 02:43:21.995080891 +0000 UTC m=+20.120301873" lastFinishedPulling="2026-01-10 02:43:30.078019278 +0000 UTC m=+28.203240261" observedRunningTime="2026-01-10 02:43:30.305509739 +0000 UTC m=+28.430730747" watchObservedRunningTime="2026-01-10 02:43:40.334799923 +0000 UTC m=+38.460020906"
	Jan 10 02:43:43 old-k8s-version-736081 kubelet[788]: I0110 02:43:43.098384     788 scope.go:117] "RemoveContainer" containerID="ace1eb7cfc15ad987dc1e486db2ebbe38360e4b5fe2e3cc347264d68e3cc1e5f"
	Jan 10 02:43:43 old-k8s-version-736081 kubelet[788]: I0110 02:43:43.322986     788 scope.go:117] "RemoveContainer" containerID="ace1eb7cfc15ad987dc1e486db2ebbe38360e4b5fe2e3cc347264d68e3cc1e5f"
	Jan 10 02:43:43 old-k8s-version-736081 kubelet[788]: I0110 02:43:43.323272     788 scope.go:117] "RemoveContainer" containerID="4a25e20e749b59fa968989a3556ab81d3a7f97cccf383c7c1210aae2311e4c20"
	Jan 10 02:43:43 old-k8s-version-736081 kubelet[788]: E0110 02:43:43.323553     788 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-twwn6_kubernetes-dashboard(97659047-3542-4db8-b917-7c614b2478b3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-twwn6" podUID="97659047-3542-4db8-b917-7c614b2478b3"
	Jan 10 02:43:51 old-k8s-version-736081 kubelet[788]: I0110 02:43:51.943892     788 scope.go:117] "RemoveContainer" containerID="4a25e20e749b59fa968989a3556ab81d3a7f97cccf383c7c1210aae2311e4c20"
	Jan 10 02:43:51 old-k8s-version-736081 kubelet[788]: E0110 02:43:51.944224     788 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-twwn6_kubernetes-dashboard(97659047-3542-4db8-b917-7c614b2478b3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-twwn6" podUID="97659047-3542-4db8-b917-7c614b2478b3"
	Jan 10 02:44:02 old-k8s-version-736081 kubelet[788]: I0110 02:44:02.149690     788 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jan 10 02:44:02 old-k8s-version-736081 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 02:44:02 old-k8s-version-736081 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 02:44:02 old-k8s-version-736081 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [878284c8f494f2e5cb225e3fc8ecb321d081824e28f7239738ac7405f263ec31] <==
	2026/01/10 02:43:30 Using namespace: kubernetes-dashboard
	2026/01/10 02:43:30 Using in-cluster config to connect to apiserver
	2026/01/10 02:43:30 Using secret token for csrf signing
	2026/01/10 02:43:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 02:43:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 02:43:30 Successful initial request to the apiserver, version: v1.28.0
	2026/01/10 02:43:30 Generating JWE encryption key
	2026/01/10 02:43:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 02:43:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 02:43:30 Initializing JWE encryption key from synchronized object
	2026/01/10 02:43:30 Creating in-cluster Sidecar client
	2026/01/10 02:43:30 Serving insecurely on HTTP port: 9090
	2026/01/10 02:43:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:44:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:43:30 Starting overwatch
	
	
	==> storage-provisioner [286e7b550e400fd0d858194af6e2d84faf1650cb6c86a08499e60624a568bee2] <==
	I0110 02:43:09.725632       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 02:43:39.737658       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e4bee80e037b45836e82e9804802d7c4dbc1107809a85cbfafa2e89816de854d] <==
	I0110 02:43:40.356801       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 02:43:40.372882       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 02:43:40.372964       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0110 02:43:57.774275       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 02:43:57.774472       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-736081_6cb9ea04-dff9-49c0-a719-c242d0f1ea54!
	I0110 02:43:57.774903       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dd39af43-26bc-44a5-a4ee-75ffe0c111d7", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-736081_6cb9ea04-dff9-49c0-a719-c242d0f1ea54 became leader
	I0110 02:43:57.875947       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-736081_6cb9ea04-dff9-49c0-a719-c242d0f1ea54!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-736081 -n old-k8s-version-736081
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-736081 -n old-k8s-version-736081: exit status 2 (371.150071ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-736081 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.55s)
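Note on the status check in this post-mortem: the helper runs out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-736081, gets exit status 2, yet stdout is "Running" and the helper records "status error: exit status 2 (may be ok)". The Go sketch below illustrates that kind of tolerant component check; the binary path and profile name are taken from the lines above, while the policy of accepting a non-zero exit whenever the printed state is "Running" is an assumption for illustration, not the test helper's actual logic.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// checkAPIServer runs `minikube status --format={{.APIServer}}` for a profile
// and reports whether the API server is Running. As an illustrative policy
// (assumption, not the helper's exact behaviour), a non-zero exit code is
// tolerated as long as the printed component state is "Running".
func checkAPIServer(minikubeBin, profile string) (bool, error) {
	cmd := exec.Command(minikubeBin, "status", "--format={{.APIServer}}", "-p", profile)
	out, err := cmd.Output() // stdout only; err carries the exit status
	state := strings.TrimSpace(string(out))
	if state == "Running" {
		return true, nil // e.g. exit status 2 with "Running" printed, as in the log above
	}
	return false, fmt.Errorf("apiserver state %q: %v", state, err)
}

func main() {
	ok, err := checkAPIServer("out/minikube-linux-arm64", "old-k8s-version-736081")
	fmt.Println(ok, err)
}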

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-290628 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-290628 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (256.234156ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:45:09Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-290628 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-290628 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-290628 describe deploy/metrics-server -n kube-system: exit status 1 (81.137738ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-290628 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
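The enable failure above follows the same pattern as the pause checks earlier in this report: before applying the addon, minikube checks whether the cluster is paused by shelling out to sudo runc list -f json, and on this crio-based profile that command exits with status 1 because /run/runc does not exist, so the command aborts with MK_ADDON_ENABLE_PAUSED and the metrics-server deployment is never created (hence the NotFound from kubectl describe). The Go sketch below reproduces that failure mode as reported in the stderr above; it is an illustration, not minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
)

// listPausedWithRunc shells out to `sudo runc list -f json`, the style of
// paused-state check whose failure is reported above. On a crio host with no
// /run/runc state directory, runc exits non-zero ("open /run/runc: no such
// file or directory"), so the caller sees a command failure instead of a
// container list.
func listPausedWithRunc() ([]byte, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc: sudo runc list -f json: %w", err)
	}
	return out, nil
}

func main() {
	if _, err := listPausedWithRunc(); err != nil {
		// On the crio profiles in this report this surfaces as the
		// "Process exited with status 1" failure shown in the stderr above.
		fmt.Println("check paused failed:", err)
	}
}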
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-290628
helpers_test.go:244: (dbg) docker inspect embed-certs-290628:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "23cbe0d69bf1f36f3e571133519e268b4644b805d56bd2cad7573cc2c5daf4e8",
	        "Created": "2026-01-10T02:44:15.973564072Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 205184,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:44:16.031659481Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/23cbe0d69bf1f36f3e571133519e268b4644b805d56bd2cad7573cc2c5daf4e8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23cbe0d69bf1f36f3e571133519e268b4644b805d56bd2cad7573cc2c5daf4e8/hostname",
	        "HostsPath": "/var/lib/docker/containers/23cbe0d69bf1f36f3e571133519e268b4644b805d56bd2cad7573cc2c5daf4e8/hosts",
	        "LogPath": "/var/lib/docker/containers/23cbe0d69bf1f36f3e571133519e268b4644b805d56bd2cad7573cc2c5daf4e8/23cbe0d69bf1f36f3e571133519e268b4644b805d56bd2cad7573cc2c5daf4e8-json.log",
	        "Name": "/embed-certs-290628",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-290628:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-290628",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "23cbe0d69bf1f36f3e571133519e268b4644b805d56bd2cad7573cc2c5daf4e8",
	                "LowerDir": "/var/lib/docker/overlay2/ed28b0ecc2316eb191e9de699f8d778d307925e07bfe86b1703fc76d49ce1266-init/diff:/var/lib/docker/overlay2/2b235871be32ebd3e55c0a490bf5b289824c6be94b4d2763f5bca3b814af3fd1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ed28b0ecc2316eb191e9de699f8d778d307925e07bfe86b1703fc76d49ce1266/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ed28b0ecc2316eb191e9de699f8d778d307925e07bfe86b1703fc76d49ce1266/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ed28b0ecc2316eb191e9de699f8d778d307925e07bfe86b1703fc76d49ce1266/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-290628",
	                "Source": "/var/lib/docker/volumes/embed-certs-290628/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-290628",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-290628",
	                "name.minikube.sigs.k8s.io": "embed-certs-290628",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ebfd4457783af484b58fe9000a16c3387eb5a669d85d7e3134b2a4fb515f51f6",
	            "SandboxKey": "/var/run/docker/netns/ebfd4457783a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-290628": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:63:24:62:e8:9d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ce2734d3261bfe1e9933e2600de541689c00d14ccf15e6611f60408ce1c3af3",
	                    "EndpointID": "79bb6083b7431173421e771400f79d0bb52af923389e1a9a4733d3fb2e429cb0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-290628",
	                        "23cbe0d69bf1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
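helpers_test.go collects the state above with ordinary docker and minikube commands, so the same post-mortem can be reproduced by hand. A minimal sketch, assuming the embed-certs-290628 profile and its container still exist on the build host (the SSH host port 33058 is specific to this run):

	# Full HostConfig/NetworkSettings dump, as captured in the stdout block above
	docker container inspect embed-certs-290628
	# Host port published for 22/tcp (33058 in this run), as queried by the provisioner
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-290628
	# Node status and the last 25 log lines, as run by helpers_test.go below
	out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-290628 -n embed-certs-290628
	out/minikube-linux-arm64 -p embed-certs-290628 logs -n 25
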
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-290628 -n embed-certs-290628
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-290628 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-290628 logs -n 25: (1.143560228s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-989144 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ ssh     │ -p cilium-989144 sudo crio config                                                                                                                                                                                                             │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │                     │
	│ delete  │ -p cilium-989144                                                                                                                                                                                                                              │ cilium-989144             │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │ 10 Jan 26 02:36 UTC │
	│ start   │ -p cert-expiration-213257 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-213257    │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │ 10 Jan 26 02:37 UTC │
	│ start   │ -p cert-expiration-213257 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-213257    │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:40 UTC │
	│ delete  │ -p cert-expiration-213257                                                                                                                                                                                                                     │ cert-expiration-213257    │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:40 UTC │
	│ start   │ -p force-systemd-flag-038359 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-038359 │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │                     │
	│ delete  │ -p force-systemd-env-088457                                                                                                                                                                                                                   │ force-systemd-env-088457  │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:40 UTC │
	│ start   │ -p cert-options-295914 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-295914       │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:41 UTC │
	│ ssh     │ cert-options-295914 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-295914       │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:41 UTC │
	│ ssh     │ -p cert-options-295914 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-295914       │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:41 UTC │
	│ delete  │ -p cert-options-295914                                                                                                                                                                                                                        │ cert-options-295914       │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:41 UTC │
	│ start   │ -p old-k8s-version-736081 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:42 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-736081 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │                     │
	│ stop    │ -p old-k8s-version-736081 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-736081 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:42 UTC │
	│ start   │ -p old-k8s-version-736081 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:43 UTC │
	│ image   │ old-k8s-version-736081 image list --format=json                                                                                                                                                                                               │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ pause   │ -p old-k8s-version-736081 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │                     │
	│ delete  │ -p old-k8s-version-736081                                                                                                                                                                                                                     │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ delete  │ -p old-k8s-version-736081                                                                                                                                                                                                                     │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ start   │ -p embed-certs-290628 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-290628        │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ addons  │ enable metrics-server -p embed-certs-290628 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-290628        │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:44:11
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:44:11.144672  204755 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:44:11.144795  204755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:44:11.144805  204755 out.go:374] Setting ErrFile to fd 2...
	I0110 02:44:11.144809  204755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:44:11.145061  204755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:44:11.145483  204755 out.go:368] Setting JSON to false
	I0110 02:44:11.146307  204755 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5201,"bootTime":1768007851,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 02:44:11.146384  204755 start.go:143] virtualization:  
	I0110 02:44:11.150620  204755 out.go:179] * [embed-certs-290628] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:44:11.154210  204755 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:44:11.154317  204755 notify.go:221] Checking for updates...
	I0110 02:44:11.160925  204755 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:44:11.164224  204755 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:44:11.167543  204755 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 02:44:11.170733  204755 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:44:11.173705  204755 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:44:11.177410  204755 config.go:182] Loaded profile config "force-systemd-flag-038359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:44:11.177539  204755 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:44:11.199577  204755 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:44:11.199692  204755 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:44:11.252995  204755 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:44:11.244026629 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:44:11.253102  204755 docker.go:319] overlay module found
	I0110 02:44:11.256333  204755 out.go:179] * Using the docker driver based on user configuration
	I0110 02:44:11.259233  204755 start.go:309] selected driver: docker
	I0110 02:44:11.259247  204755 start.go:928] validating driver "docker" against <nil>
	I0110 02:44:11.259260  204755 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:44:11.260101  204755 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:44:11.311586  204755 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:44:11.302699572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:44:11.311742  204755 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 02:44:11.312030  204755 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:44:11.314960  204755 out.go:179] * Using Docker driver with root privileges
	I0110 02:44:11.317894  204755 cni.go:84] Creating CNI manager for ""
	I0110 02:44:11.317971  204755 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:44:11.317988  204755 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 02:44:11.318061  204755 start.go:353] cluster config:
	{Name:embed-certs-290628 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-290628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:44:11.321162  204755 out.go:179] * Starting "embed-certs-290628" primary control-plane node in "embed-certs-290628" cluster
	I0110 02:44:11.324031  204755 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:44:11.326942  204755 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:44:11.329845  204755 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:44:11.329899  204755 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 02:44:11.329909  204755 cache.go:65] Caching tarball of preloaded images
	I0110 02:44:11.330041  204755 preload.go:251] Found /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 02:44:11.330052  204755 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:44:11.330163  204755 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/config.json ...
	I0110 02:44:11.330181  204755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/config.json: {Name:mke30036266da6f16933f804fd5d7218e492a454 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:44:11.330339  204755 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:44:11.350652  204755 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:44:11.350677  204755 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:44:11.350700  204755 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:44:11.350731  204755 start.go:360] acquireMachinesLock for embed-certs-290628: {Name:mkecc1830917e603b9fb1bffd9b396deb689a507 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:44:11.350839  204755 start.go:364] duration metric: took 82.344µs to acquireMachinesLock for "embed-certs-290628"
	I0110 02:44:11.350873  204755 start.go:93] Provisioning new machine with config: &{Name:embed-certs-290628 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-290628 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:44:11.350939  204755 start.go:125] createHost starting for "" (driver="docker")
	I0110 02:44:11.354328  204755 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:44:11.354595  204755 start.go:159] libmachine.API.Create for "embed-certs-290628" (driver="docker")
	I0110 02:44:11.354633  204755 client.go:173] LocalClient.Create starting
	I0110 02:44:11.354715  204755 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem
	I0110 02:44:11.354757  204755 main.go:144] libmachine: Decoding PEM data...
	I0110 02:44:11.354778  204755 main.go:144] libmachine: Parsing certificate...
	I0110 02:44:11.354840  204755 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem
	I0110 02:44:11.354863  204755 main.go:144] libmachine: Decoding PEM data...
	I0110 02:44:11.354878  204755 main.go:144] libmachine: Parsing certificate...
	I0110 02:44:11.355224  204755 cli_runner.go:164] Run: docker network inspect embed-certs-290628 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:44:11.370698  204755 cli_runner.go:211] docker network inspect embed-certs-290628 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:44:11.370786  204755 network_create.go:284] running [docker network inspect embed-certs-290628] to gather additional debugging logs...
	I0110 02:44:11.370807  204755 cli_runner.go:164] Run: docker network inspect embed-certs-290628
	W0110 02:44:11.386224  204755 cli_runner.go:211] docker network inspect embed-certs-290628 returned with exit code 1
	I0110 02:44:11.386270  204755 network_create.go:287] error running [docker network inspect embed-certs-290628]: docker network inspect embed-certs-290628: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-290628 not found
	I0110 02:44:11.386285  204755 network_create.go:289] output of [docker network inspect embed-certs-290628]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-290628 not found
	
	** /stderr **
	I0110 02:44:11.386380  204755 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:44:11.402376  204755 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dba69832168e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:28:cf:b8:5c:6c} reservation:<nil>}
	I0110 02:44:11.402702  204755 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1bcca9209747 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d2:8b:83:a4:43:ae} reservation:<nil>}
	I0110 02:44:11.403041  204755 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-383ddfacc8f8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:26:ff:fb:97:66:af} reservation:<nil>}
	I0110 02:44:11.403432  204755 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a62430}
	I0110 02:44:11.403454  204755 network_create.go:124] attempt to create docker network embed-certs-290628 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0110 02:44:11.403506  204755 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-290628 embed-certs-290628
	I0110 02:44:11.463640  204755 network_create.go:108] docker network embed-certs-290628 192.168.76.0/24 created
	I0110 02:44:11.463675  204755 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-290628" container
	I0110 02:44:11.463767  204755 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:44:11.480371  204755 cli_runner.go:164] Run: docker volume create embed-certs-290628 --label name.minikube.sigs.k8s.io=embed-certs-290628 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:44:11.497552  204755 oci.go:103] Successfully created a docker volume embed-certs-290628
	I0110 02:44:11.497635  204755 cli_runner.go:164] Run: docker run --rm --name embed-certs-290628-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-290628 --entrypoint /usr/bin/test -v embed-certs-290628:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 02:44:12.055819  204755 oci.go:107] Successfully prepared a docker volume embed-certs-290628
	I0110 02:44:12.055898  204755 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:44:12.055911  204755 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 02:44:12.055979  204755 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-290628:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 02:44:15.894795  204755 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-290628:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.838777575s)
	I0110 02:44:15.894829  204755 kic.go:203] duration metric: took 3.838914835s to extract preloaded images to volume ...
	W0110 02:44:15.894974  204755 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 02:44:15.895079  204755 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 02:44:15.959027  204755 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-290628 --name embed-certs-290628 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-290628 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-290628 --network embed-certs-290628 --ip 192.168.76.2 --volume embed-certs-290628:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 02:44:16.262891  204755 cli_runner.go:164] Run: docker container inspect embed-certs-290628 --format={{.State.Running}}
	I0110 02:44:16.282295  204755 cli_runner.go:164] Run: docker container inspect embed-certs-290628 --format={{.State.Status}}
	I0110 02:44:16.311665  204755 cli_runner.go:164] Run: docker exec embed-certs-290628 stat /var/lib/dpkg/alternatives/iptables
	I0110 02:44:16.369456  204755 oci.go:144] the created container "embed-certs-290628" has a running status.
	I0110 02:44:16.369507  204755 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa...
	I0110 02:44:16.561490  204755 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 02:44:16.588842  204755 cli_runner.go:164] Run: docker container inspect embed-certs-290628 --format={{.State.Status}}
	I0110 02:44:16.612298  204755 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 02:44:16.612325  204755 kic_runner.go:114] Args: [docker exec --privileged embed-certs-290628 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 02:44:16.672576  204755 cli_runner.go:164] Run: docker container inspect embed-certs-290628 --format={{.State.Status}}
	I0110 02:44:16.700525  204755 machine.go:94] provisionDockerMachine start ...
	I0110 02:44:16.700612  204755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:44:16.730636  204755 main.go:144] libmachine: Using SSH client type: native
	I0110 02:44:16.730957  204755 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I0110 02:44:16.730973  204755 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:44:16.731978  204755 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52302->127.0.0.1:33058: read: connection reset by peer
	I0110 02:44:19.879317  204755 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-290628
	
	I0110 02:44:19.879342  204755 ubuntu.go:182] provisioning hostname "embed-certs-290628"
	I0110 02:44:19.879402  204755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:44:19.897590  204755 main.go:144] libmachine: Using SSH client type: native
	I0110 02:44:19.897914  204755 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I0110 02:44:19.897930  204755 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-290628 && echo "embed-certs-290628" | sudo tee /etc/hostname
	I0110 02:44:20.070209  204755 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-290628
	
	I0110 02:44:20.070301  204755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:44:20.088992  204755 main.go:144] libmachine: Using SSH client type: native
	I0110 02:44:20.089335  204755 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I0110 02:44:20.089358  204755 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-290628' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-290628/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-290628' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:44:20.244014  204755 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:44:20.244085  204755 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2353/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2353/.minikube}
	I0110 02:44:20.244133  204755 ubuntu.go:190] setting up certificates
	I0110 02:44:20.244169  204755 provision.go:84] configureAuth start
	I0110 02:44:20.244276  204755 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-290628
	I0110 02:44:20.262436  204755 provision.go:143] copyHostCerts
	I0110 02:44:20.262505  204755 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem, removing ...
	I0110 02:44:20.262514  204755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem
	I0110 02:44:20.262591  204755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem (1082 bytes)
	I0110 02:44:20.262687  204755 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem, removing ...
	I0110 02:44:20.262692  204755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem
	I0110 02:44:20.262716  204755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem (1123 bytes)
	I0110 02:44:20.262777  204755 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem, removing ...
	I0110 02:44:20.262782  204755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem
	I0110 02:44:20.262806  204755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem (1679 bytes)
	I0110 02:44:20.262865  204755 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem org=jenkins.embed-certs-290628 san=[127.0.0.1 192.168.76.2 embed-certs-290628 localhost minikube]
	I0110 02:44:20.587934  204755 provision.go:177] copyRemoteCerts
	I0110 02:44:20.588021  204755 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:44:20.588068  204755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:44:20.605988  204755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa Username:docker}
	I0110 02:44:20.711463  204755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 02:44:20.733192  204755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:44:20.751207  204755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0110 02:44:20.768554  204755 provision.go:87] duration metric: took 524.344128ms to configureAuth
	I0110 02:44:20.768578  204755 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:44:20.768757  204755 config.go:182] Loaded profile config "embed-certs-290628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:44:20.768865  204755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:44:20.786961  204755 main.go:144] libmachine: Using SSH client type: native
	I0110 02:44:20.787281  204755 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I0110 02:44:20.787296  204755 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:44:21.099292  204755 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:44:21.099317  204755 machine.go:97] duration metric: took 4.398768043s to provisionDockerMachine
	I0110 02:44:21.099329  204755 client.go:176] duration metric: took 9.744681667s to LocalClient.Create
	I0110 02:44:21.099342  204755 start.go:167] duration metric: took 9.744747273s to libmachine.API.Create "embed-certs-290628"
	I0110 02:44:21.099365  204755 start.go:293] postStartSetup for "embed-certs-290628" (driver="docker")
	I0110 02:44:21.099380  204755 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:44:21.099445  204755 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:44:21.099499  204755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:44:21.117290  204755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa Username:docker}
	I0110 02:44:21.219468  204755 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:44:21.222685  204755 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:44:21.222756  204755 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:44:21.222774  204755 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/addons for local assets ...
	I0110 02:44:21.222831  204755 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/files for local assets ...
	I0110 02:44:21.222921  204755 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> 41682.pem in /etc/ssl/certs
	I0110 02:44:21.223023  204755 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:44:21.230160  204755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:44:21.247011  204755 start.go:296] duration metric: took 147.628086ms for postStartSetup
	I0110 02:44:21.247375  204755 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-290628
	I0110 02:44:21.263889  204755 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/config.json ...
	I0110 02:44:21.264166  204755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:44:21.264216  204755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:44:21.280601  204755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa Username:docker}
	I0110 02:44:21.380760  204755 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:44:21.385144  204755 start.go:128] duration metric: took 10.034191495s to createHost
	I0110 02:44:21.385253  204755 start.go:83] releasing machines lock for "embed-certs-290628", held for 10.034394648s
	I0110 02:44:21.385346  204755 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-290628
	I0110 02:44:21.405682  204755 ssh_runner.go:195] Run: cat /version.json
	I0110 02:44:21.405775  204755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:44:21.406102  204755 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:44:21.406153  204755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:44:21.432088  204755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa Username:docker}
	I0110 02:44:21.439902  204755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa Username:docker}
	I0110 02:44:21.652667  204755 ssh_runner.go:195] Run: systemctl --version
	I0110 02:44:21.659460  204755 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:44:21.693322  204755 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:44:21.697715  204755 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:44:21.697843  204755 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:44:21.724772  204755 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 02:44:21.724800  204755 start.go:496] detecting cgroup driver to use...
	I0110 02:44:21.724829  204755 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 02:44:21.724877  204755 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:44:21.741975  204755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:44:21.754398  204755 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:44:21.754468  204755 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:44:21.771944  204755 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:44:21.788865  204755 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:44:21.906857  204755 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:44:22.036889  204755 docker.go:234] disabling docker service ...
	I0110 02:44:22.037035  204755 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:44:22.060415  204755 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:44:22.074569  204755 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:44:22.196800  204755 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:44:22.313135  204755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:44:22.325694  204755 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:44:22.339693  204755 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:44:22.339787  204755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:44:22.348313  204755 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 02:44:22.348415  204755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:44:22.357179  204755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:44:22.365529  204755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:44:22.374833  204755 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:44:22.382672  204755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:44:22.391116  204755 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:44:22.404900  204755 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:44:22.413462  204755 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:44:22.420786  204755 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:44:22.428371  204755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:44:22.532659  204755 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 02:44:22.695567  204755 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:44:22.695682  204755 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:44:22.699572  204755 start.go:574] Will wait 60s for crictl version
	I0110 02:44:22.699659  204755 ssh_runner.go:195] Run: which crictl
	I0110 02:44:22.703300  204755 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:44:22.730783  204755 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:44:22.730915  204755 ssh_runner.go:195] Run: crio --version
	I0110 02:44:22.758736  204755 ssh_runner.go:195] Run: crio --version
	I0110 02:44:22.789677  204755 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:44:22.792599  204755 cli_runner.go:164] Run: docker network inspect embed-certs-290628 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:44:22.807865  204755 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 02:44:22.811629  204755 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:44:22.821752  204755 kubeadm.go:884] updating cluster {Name:embed-certs-290628 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-290628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:44:22.821870  204755 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:44:22.821937  204755 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:44:22.858395  204755 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:44:22.858415  204755 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:44:22.858466  204755 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:44:22.886755  204755 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:44:22.886774  204755 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:44:22.886782  204755 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 02:44:22.886863  204755 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-290628 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-290628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
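	The [Unit]/[Service] fragment above is the kubelet drop-in that lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf via the 368-byte scp a few lines below. Once the node is up it can be inspected with ordinary systemd tooling, for example:
	
	    $ minikube -p embed-certs-290628 ssh -- sudo systemctl cat kubelet
	    # prints kubelet.service plus the 10-kubeadm.conf drop-in, including the ExecStart override shown above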
	I0110 02:44:22.886937  204755 ssh_runner.go:195] Run: crio config
	I0110 02:44:22.966797  204755 cni.go:84] Creating CNI manager for ""
	I0110 02:44:22.966822  204755 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:44:22.966836  204755 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:44:22.966859  204755 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-290628 NodeName:embed-certs-290628 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:44:22.967001  204755 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-290628"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:44:22.967069  204755 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:44:22.974715  204755 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:44:22.974794  204755 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:44:22.982715  204755 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0110 02:44:22.994753  204755 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:44:23.008619  204755 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
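	The 2235-byte file scp'd here is the kubeadm config rendered above. If it ever needs a manual sanity check before init, newer kubeadm releases ship a validator subcommand that can be pointed at the same file (a usage sketch, not something this run executes):
	
	    $ minikube -p embed-certs-290628 ssh -- \
	        sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new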
	I0110 02:44:23.021568  204755 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:44:23.024859  204755 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:44:23.034424  204755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:44:23.147629  204755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:44:23.164987  204755 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628 for IP: 192.168.76.2
	I0110 02:44:23.165007  204755 certs.go:195] generating shared ca certs ...
	I0110 02:44:23.165023  204755 certs.go:227] acquiring lock for ca certs: {Name:mk60bb8578732e3e8efcf11c7d77fb48585828c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:44:23.165161  204755 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key
	I0110 02:44:23.165212  204755 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key
	I0110 02:44:23.165223  204755 certs.go:257] generating profile certs ...
	I0110 02:44:23.165294  204755 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/client.key
	I0110 02:44:23.165315  204755 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/client.crt with IP's: []
	I0110 02:44:23.377662  204755 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/client.crt ...
	I0110 02:44:23.377701  204755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/client.crt: {Name:mk60f3258a2fc5f80a6ab9df4a30ef8edb1a0450 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:44:23.377934  204755 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/client.key ...
	I0110 02:44:23.377948  204755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/client.key: {Name:mk29daee6daa38ebbf822b67f4e0df5c21ee0b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:44:23.378061  204755 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/apiserver.key.4427bfdd
	I0110 02:44:23.378078  204755 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/apiserver.crt.4427bfdd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0110 02:44:23.587687  204755 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/apiserver.crt.4427bfdd ...
	I0110 02:44:23.587717  204755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/apiserver.crt.4427bfdd: {Name:mkbf5b6066a3fdda630284d90d01153811289b42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:44:23.587905  204755 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/apiserver.key.4427bfdd ...
	I0110 02:44:23.587920  204755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/apiserver.key.4427bfdd: {Name:mk9cb1ab4ea91816c53ccc043c14c92f5cf1bc5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:44:23.588004  204755 certs.go:382] copying /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/apiserver.crt.4427bfdd -> /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/apiserver.crt
	I0110 02:44:23.588077  204755 certs.go:386] copying /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/apiserver.key.4427bfdd -> /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/apiserver.key
	I0110 02:44:23.588136  204755 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/proxy-client.key
	I0110 02:44:23.588153  204755 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/proxy-client.crt with IP's: []
	I0110 02:44:23.889458  204755 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/proxy-client.crt ...
	I0110 02:44:23.889504  204755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/proxy-client.crt: {Name:mka72385e8f60a33724cebce06a03325cf430e0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:44:23.889691  204755 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/proxy-client.key ...
	I0110 02:44:23.889704  204755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/proxy-client.key: {Name:mk757c71ab55524b0a727dcfd9247f7d1b41a739 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:44:23.889897  204755 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem (1338 bytes)
	W0110 02:44:23.889947  204755 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168_empty.pem, impossibly tiny 0 bytes
	I0110 02:44:23.889961  204755 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:44:23.889998  204755 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:44:23.890027  204755 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:44:23.890053  204755 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem (1679 bytes)
	I0110 02:44:23.890106  204755 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:44:23.890663  204755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:44:23.909623  204755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:44:23.934029  204755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:44:23.950912  204755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 02:44:23.969593  204755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0110 02:44:23.990324  204755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:44:24.008119  204755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:44:24.029280  204755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I0110 02:44:24.048017  204755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:44:24.068574  204755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I0110 02:44:24.087618  204755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I0110 02:44:24.105434  204755 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:44:24.118415  204755 ssh_runner.go:195] Run: openssl version
	I0110 02:44:24.124612  204755 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:44:24.132598  204755 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:44:24.139888  204755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:44:24.143451  204755 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:44:24.143559  204755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:44:24.184350  204755 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:44:24.191672  204755 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 02:44:24.198484  204755 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I0110 02:44:24.205657  204755 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I0110 02:44:24.212797  204755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I0110 02:44:24.216412  204755 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:58 /usr/share/ca-certificates/4168.pem
	I0110 02:44:24.216474  204755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I0110 02:44:24.259447  204755 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:44:24.266914  204755 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4168.pem /etc/ssl/certs/51391683.0
	I0110 02:44:24.274284  204755 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I0110 02:44:24.281375  204755 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I0110 02:44:24.288985  204755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I0110 02:44:24.292546  204755 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:58 /usr/share/ca-certificates/41682.pem
	I0110 02:44:24.292611  204755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I0110 02:44:24.333411  204755 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:44:24.340761  204755 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41682.pem /etc/ssl/certs/3ec20f2e.0
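	The openssl -hash / ln -fs pairs above build the hash-named symlinks OpenSSL uses to look up trusted CAs: each PEM gets a link named <subject-hash>.0 under /etc/ssl/certs. Any of the hashes can be reproduced by hand with plain openssl, e.g. for the minikube CA added earlier:
	
	    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    b5213941    # matches the b5213941.0 symlink created at 02:44:24.191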
	I0110 02:44:24.347903  204755 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:44:24.351480  204755 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 02:44:24.351585  204755 kubeadm.go:401] StartCluster: {Name:embed-certs-290628 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-290628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:44:24.351672  204755 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:44:24.351732  204755 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:44:24.381960  204755 cri.go:96] found id: ""
	I0110 02:44:24.382057  204755 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:44:24.389741  204755 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 02:44:24.396968  204755 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:44:24.397062  204755 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:44:24.404521  204755 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:44:24.404589  204755 kubeadm.go:158] found existing configuration files:
	
	I0110 02:44:24.404661  204755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:44:24.412024  204755 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:44:24.412099  204755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:44:24.420858  204755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:44:24.429711  204755 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:44:24.429778  204755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:44:24.439173  204755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:44:24.450240  204755 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:44:24.450307  204755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:44:24.458227  204755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:44:24.467707  204755 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:44:24.467768  204755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:44:24.475557  204755 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:44:24.517031  204755 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:44:24.517097  204755 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:44:24.594158  204755 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:44:24.594310  204755 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:44:24.594395  204755 kubeadm.go:319] OS: Linux
	I0110 02:44:24.594480  204755 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:44:24.594567  204755 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:44:24.594676  204755 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:44:24.594767  204755 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:44:24.594854  204755 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:44:24.594963  204755 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:44:24.595044  204755 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:44:24.595152  204755 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:44:24.595241  204755 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:44:24.675660  204755 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:44:24.675848  204755 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:44:24.675980  204755 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:44:24.682978  204755 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 02:44:24.689457  204755 out.go:252]   - Generating certificates and keys ...
	I0110 02:44:24.689617  204755 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:44:24.689727  204755 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:44:25.110099  204755 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 02:44:25.592301  204755 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 02:44:25.940094  204755 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 02:44:26.078466  204755 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 02:44:26.337627  204755 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 02:44:26.337920  204755 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-290628 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 02:44:26.601638  204755 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 02:44:26.601898  204755 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-290628 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 02:44:27.232227  204755 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 02:44:27.596793  204755 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 02:44:28.079312  204755 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 02:44:28.079690  204755 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:44:28.341299  204755 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:44:28.663255  204755 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:44:28.866112  204755 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:44:29.406443  204755 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:44:30.397824  204755 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:44:30.398538  204755 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:44:30.401285  204755 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:44:30.406919  204755 out.go:252]   - Booting up control plane ...
	I0110 02:44:30.407026  204755 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:44:30.407103  204755 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:44:30.407175  204755 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:44:30.428620  204755 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:44:30.428735  204755 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:44:30.432528  204755 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:44:30.436284  204755 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:44:30.436338  204755 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:44:30.580306  204755 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:44:30.580428  204755 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:44:31.588254  204755 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.008709577s
	I0110 02:44:31.591251  204755 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0110 02:44:31.591596  204755 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0110 02:44:31.592013  204755 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0110 02:44:31.592629  204755 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0110 02:44:33.101515  204755 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.508506826s
	I0110 02:44:34.652486  204755 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.059252722s
	I0110 02:44:36.595071  204755 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.002982787s
	I0110 02:44:36.635586  204755 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0110 02:44:36.651360  204755 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0110 02:44:36.680349  204755 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0110 02:44:36.680551  204755 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-290628 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0110 02:44:36.696335  204755 kubeadm.go:319] [bootstrap-token] Using token: cdxk0k.tahnxpicr11hywjv
	I0110 02:44:36.698098  204755 out.go:252]   - Configuring RBAC rules ...
	I0110 02:44:36.698223  204755 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0110 02:44:36.704530  204755 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0110 02:44:36.720437  204755 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0110 02:44:36.725369  204755 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0110 02:44:36.731381  204755 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0110 02:44:36.739939  204755 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0110 02:44:37.003856  204755 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0110 02:44:37.432048  204755 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0110 02:44:38.003144  204755 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0110 02:44:38.004260  204755 kubeadm.go:319] 
	I0110 02:44:38.004333  204755 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0110 02:44:38.004343  204755 kubeadm.go:319] 
	I0110 02:44:38.004420  204755 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0110 02:44:38.004432  204755 kubeadm.go:319] 
	I0110 02:44:38.004457  204755 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0110 02:44:38.004519  204755 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0110 02:44:38.004573  204755 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0110 02:44:38.004580  204755 kubeadm.go:319] 
	I0110 02:44:38.004634  204755 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0110 02:44:38.004641  204755 kubeadm.go:319] 
	I0110 02:44:38.004688  204755 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0110 02:44:38.004696  204755 kubeadm.go:319] 
	I0110 02:44:38.004749  204755 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0110 02:44:38.004830  204755 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0110 02:44:38.004900  204755 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0110 02:44:38.004908  204755 kubeadm.go:319] 
	I0110 02:44:38.004992  204755 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0110 02:44:38.005070  204755 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0110 02:44:38.005078  204755 kubeadm.go:319] 
	I0110 02:44:38.005162  204755 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token cdxk0k.tahnxpicr11hywjv \
	I0110 02:44:38.005268  204755 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e531c0ad449fbe8263f118458e677222d5af5abf33ad8c77cbc1152855f23f5 \
	I0110 02:44:38.005291  204755 kubeadm.go:319] 	--control-plane 
	I0110 02:44:38.005295  204755 kubeadm.go:319] 
	I0110 02:44:38.005380  204755 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0110 02:44:38.005384  204755 kubeadm.go:319] 
	I0110 02:44:38.005465  204755 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token cdxk0k.tahnxpicr11hywjv \
	I0110 02:44:38.005568  204755 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e531c0ad449fbe8263f118458e677222d5af5abf33ad8c77cbc1152855f23f5 
	I0110 02:44:38.010639  204755 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:44:38.011097  204755 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:44:38.011253  204755 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
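	The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 digest of the cluster CA's DER-encoded public key. With certificateDir set to /var/lib/minikube/certs, it can be recomputed on the node with the usual openssl pipeline (assuming an RSA CA key, which is what minikube generates):
	
	    $ openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	        | openssl rsa -pubin -outform der 2>/dev/null \
	        | openssl dgst -sha256 -hex | sed 's/^.* //'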
	I0110 02:44:38.011286  204755 cni.go:84] Creating CNI manager for ""
	I0110 02:44:38.011297  204755 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:44:38.015122  204755 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0110 02:44:38.018304  204755 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0110 02:44:38.022928  204755 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0110 02:44:38.022950  204755 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0110 02:44:38.039046  204755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
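	The cni.yaml applied here is the kindnet manifest recommended above for the docker driver + cri-o combination. Once applied, its DaemonSet can be checked with ordinary kubectl (the kindnet name is taken from that manifest, not from this log):
	
	    $ kubectl --context embed-certs-290628 -n kube-system get daemonset kindnet -o wide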
	I0110 02:44:38.341505  204755 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0110 02:44:38.341697  204755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:44:38.341830  204755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-290628 minikube.k8s.io/updated_at=2026_01_10T02_44_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510 minikube.k8s.io/name=embed-certs-290628 minikube.k8s.io/primary=true
	I0110 02:44:38.473051  204755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:44:38.473112  204755 ops.go:34] apiserver oom_adj: -16
	I0110 02:44:38.973280  204755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:44:39.473206  204755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:44:39.973879  204755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:44:40.473776  204755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:44:40.973161  204755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:44:41.474112  204755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:44:41.973747  204755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:44:42.473789  204755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:44:42.973807  204755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:44:43.081777  204755 kubeadm.go:1114] duration metric: took 4.740143265s to wait for elevateKubeSystemPrivileges
	I0110 02:44:43.081809  204755 kubeadm.go:403] duration metric: took 18.730229032s to StartCluster
	I0110 02:44:43.081825  204755 settings.go:142] acquiring lock: {Name:mkf19ab3097ad600bdb33c5b47b20062ddaabac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:44:43.081884  204755 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:44:43.082891  204755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:44:43.083093  204755 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:44:43.083194  204755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0110 02:44:43.083415  204755 config.go:182] Loaded profile config "embed-certs-290628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:44:43.083446  204755 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:44:43.083504  204755 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-290628"
	I0110 02:44:43.083522  204755 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-290628"
	I0110 02:44:43.083540  204755 host.go:66] Checking if "embed-certs-290628" exists ...
	I0110 02:44:43.084049  204755 addons.go:70] Setting default-storageclass=true in profile "embed-certs-290628"
	I0110 02:44:43.084065  204755 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-290628"
	I0110 02:44:43.084338  204755 cli_runner.go:164] Run: docker container inspect embed-certs-290628 --format={{.State.Status}}
	I0110 02:44:43.084727  204755 cli_runner.go:164] Run: docker container inspect embed-certs-290628 --format={{.State.Status}}
	I0110 02:44:43.087097  204755 out.go:179] * Verifying Kubernetes components...
	I0110 02:44:43.090047  204755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:44:43.106270  204755 addons.go:239] Setting addon default-storageclass=true in "embed-certs-290628"
	I0110 02:44:43.106318  204755 host.go:66] Checking if "embed-certs-290628" exists ...
	I0110 02:44:43.106818  204755 cli_runner.go:164] Run: docker container inspect embed-certs-290628 --format={{.State.Status}}
	I0110 02:44:43.124899  204755 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:44:43.128655  204755 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:44:43.128679  204755 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:44:43.128744  204755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:44:43.151782  204755 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:44:43.151838  204755 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:44:43.151897  204755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:44:43.172723  204755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa Username:docker}
	I0110 02:44:43.184465  204755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa Username:docker}
	I0110 02:44:43.514366  204755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
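	That sed pipeline edits the coredns ConfigMap in place: it adds a log directive above the existing errors line and inserts a hosts block immediately above the "forward . /etc/resolv.conf" line. Extracted from the sed expressions, the injected block is:
	
	        hosts {
	           192.168.76.1 host.minikube.internal
	           fallthrough
	        }
	
	which is what the "host record injected into CoreDNS's ConfigMap" line at 02:44:44 confirms.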
	I0110 02:44:43.514564  204755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:44:43.531249  204755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:44:43.574530  204755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:44:44.253770  204755 node_ready.go:35] waiting up to 6m0s for node "embed-certs-290628" to be "Ready" ...
	I0110 02:44:44.254174  204755 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0110 02:44:44.484175  204755 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0110 02:44:44.487159  204755 addons.go:530] duration metric: took 1.403702188s for enable addons: enabled=[storage-provisioner default-storageclass]
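	With both addons enabled, the provisioner pod and the default StorageClass can be checked from the host with ordinary kubectl; the "standard" class and the storage-provisioner pod name are minikube's usual defaults, assumed rather than shown in this log:
	
	    $ kubectl --context embed-certs-290628 get storageclass
	    $ kubectl --context embed-certs-290628 -n kube-system get pod storage-provisioner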
	I0110 02:44:44.757752  204755 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-290628" context rescaled to 1 replicas
	W0110 02:44:46.257084  204755 node_ready.go:57] node "embed-certs-290628" has "Ready":"False" status (will retry)
	W0110 02:44:48.257523  204755 node_ready.go:57] node "embed-certs-290628" has "Ready":"False" status (will retry)
	W0110 02:44:50.758737  204755 node_ready.go:57] node "embed-certs-290628" has "Ready":"False" status (will retry)
	I0110 02:44:54.655845  190834 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001043276s
	I0110 02:44:54.655885  190834 kubeadm.go:319] 
	I0110 02:44:54.656000  190834 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 02:44:54.656061  190834 kubeadm.go:319] 	- The kubelet is not running
	I0110 02:44:54.656378  190834 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 02:44:54.656386  190834 kubeadm.go:319] 
	I0110 02:44:54.656597  190834 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 02:44:54.656866  190834 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 02:44:54.656937  190834 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 02:44:54.656946  190834 kubeadm.go:319] 
	I0110 02:44:54.661702  190834 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:44:54.662211  190834 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:44:54.662360  190834 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:44:54.662749  190834 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 02:44:54.662765  190834 kubeadm.go:319] 
	I0110 02:44:54.662915  190834 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W0110 02:44:54.663023  190834 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-038359 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-038359 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001043276s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
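	The checks kubeadm suggests above can be run against the node container itself; a minimal troubleshooting sketch, assuming the force-systemd-flag-038359 profile named in this log and the standard 'minikube ssh' passthrough:
	  # inspect the kubelet that never passed the /healthz probe
	  minikube -p force-systemd-flag-038359 ssh -- sudo systemctl status kubelet
	  minikube -p force-systemd-flag-038359 ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
	  # the endpoint kubeadm polled for up to 4m0s
	  minikube -p force-systemd-flag-038359 ssh -- curl -sS http://127.0.0.1:10248/healthz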
	
	I0110 02:44:54.663140  190834 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0110 02:44:55.075851  190834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:44:55.089502  190834 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:44:55.089569  190834 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:44:55.098023  190834 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:44:55.098042  190834 kubeadm.go:158] found existing configuration files:
	
	I0110 02:44:55.098097  190834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:44:55.106510  190834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:44:55.106629  190834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:44:55.114942  190834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:44:55.123295  190834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:44:55.123364  190834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:44:55.133325  190834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:44:55.141513  190834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:44:55.141578  190834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:44:55.150542  190834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:44:55.160255  190834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:44:55.160330  190834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:44:55.168655  190834 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:44:55.214853  190834 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:44:55.214919  190834 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:44:55.294435  190834 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:44:55.294511  190834 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:44:55.294551  190834 kubeadm.go:319] OS: Linux
	I0110 02:44:55.294601  190834 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:44:55.294652  190834 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:44:55.294703  190834 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:44:55.294755  190834 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:44:55.294805  190834 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:44:55.294860  190834 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:44:55.294912  190834 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:44:55.294963  190834 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:44:55.295013  190834 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:44:55.365460  190834 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:44:55.365574  190834 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:44:55.365671  190834 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:44:55.373470  190834 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W0110 02:44:53.257249  204755 node_ready.go:57] node "embed-certs-290628" has "Ready":"False" status (will retry)
	W0110 02:44:55.757810  204755 node_ready.go:57] node "embed-certs-290628" has "Ready":"False" status (will retry)
	I0110 02:44:55.376963  190834 out.go:252]   - Generating certificates and keys ...
	I0110 02:44:55.377057  190834 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:44:55.377127  190834 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:44:55.377207  190834 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0110 02:44:55.377272  190834 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I0110 02:44:55.377346  190834 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I0110 02:44:55.377582  190834 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I0110 02:44:55.377663  190834 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I0110 02:44:55.377995  190834 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I0110 02:44:55.378428  190834 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0110 02:44:55.379000  190834 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0110 02:44:55.379288  190834 kubeadm.go:319] [certs] Using the existing "sa" key
	I0110 02:44:55.379356  190834 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:44:55.765220  190834 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:44:55.912388  190834 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:44:56.016742  190834 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:44:56.132433  190834 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:44:56.887057  190834 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:44:56.887604  190834 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:44:56.890068  190834 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:44:56.268893  204755 node_ready.go:49] node "embed-certs-290628" is "Ready"
	I0110 02:44:56.268918  204755 node_ready.go:38] duration metric: took 12.015118187s for node "embed-certs-290628" to be "Ready" ...
	I0110 02:44:56.268931  204755 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:44:56.268989  204755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:44:56.292834  204755 api_server.go:72] duration metric: took 13.209713199s to wait for apiserver process to appear ...
	I0110 02:44:56.292862  204755 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:44:56.292885  204755 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:44:56.311286  204755 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 02:44:56.312475  204755 api_server.go:141] control plane version: v1.35.0
	I0110 02:44:56.312500  204755 api_server.go:131] duration metric: took 19.631421ms to wait for apiserver health ...
	I0110 02:44:56.312509  204755 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:44:56.317859  204755 system_pods.go:59] 8 kube-system pods found
	I0110 02:44:56.317900  204755 system_pods.go:61] "coredns-7d764666f9-jwjfn" [858d8d57-f89b-4d9b-8aa5-dbf6572c266d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:44:56.317910  204755 system_pods.go:61] "etcd-embed-certs-290628" [7905e553-9138-4822-9649-ee6f3e2eb58d] Running
	I0110 02:44:56.317916  204755 system_pods.go:61] "kindnet-g87jl" [be4cb760-ae30-4fd4-a122-e8e7f957939c] Running
	I0110 02:44:56.317921  204755 system_pods.go:61] "kube-apiserver-embed-certs-290628" [2e5da56c-c5c4-44b7-a1fd-36b46bd328d3] Running
	I0110 02:44:56.317926  204755 system_pods.go:61] "kube-controller-manager-embed-certs-290628" [186646b5-8524-41e4-9566-6466f756d103] Running
	I0110 02:44:56.317931  204755 system_pods.go:61] "kube-proxy-bdvjd" [af84947c-dfb8-478d-a250-f573f1edd0d7] Running
	I0110 02:44:56.317939  204755 system_pods.go:61] "kube-scheduler-embed-certs-290628" [699bf674-d25c-4ebc-8db6-5d58a627de92] Running
	I0110 02:44:56.317943  204755 system_pods.go:61] "storage-provisioner" [d2832396-2583-470a-a396-7e8bb76186de] Pending
	I0110 02:44:56.317957  204755 system_pods.go:74] duration metric: took 5.442542ms to wait for pod list to return data ...
	I0110 02:44:56.317965  204755 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:44:56.321132  204755 default_sa.go:45] found service account: "default"
	I0110 02:44:56.321156  204755 default_sa.go:55] duration metric: took 3.186014ms for default service account to be created ...
	I0110 02:44:56.321166  204755 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:44:56.323686  204755 system_pods.go:86] 8 kube-system pods found
	I0110 02:44:56.323715  204755 system_pods.go:89] "coredns-7d764666f9-jwjfn" [858d8d57-f89b-4d9b-8aa5-dbf6572c266d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:44:56.323722  204755 system_pods.go:89] "etcd-embed-certs-290628" [7905e553-9138-4822-9649-ee6f3e2eb58d] Running
	I0110 02:44:56.323728  204755 system_pods.go:89] "kindnet-g87jl" [be4cb760-ae30-4fd4-a122-e8e7f957939c] Running
	I0110 02:44:56.323733  204755 system_pods.go:89] "kube-apiserver-embed-certs-290628" [2e5da56c-c5c4-44b7-a1fd-36b46bd328d3] Running
	I0110 02:44:56.323739  204755 system_pods.go:89] "kube-controller-manager-embed-certs-290628" [186646b5-8524-41e4-9566-6466f756d103] Running
	I0110 02:44:56.323743  204755 system_pods.go:89] "kube-proxy-bdvjd" [af84947c-dfb8-478d-a250-f573f1edd0d7] Running
	I0110 02:44:56.323777  204755 system_pods.go:89] "kube-scheduler-embed-certs-290628" [699bf674-d25c-4ebc-8db6-5d58a627de92] Running
	I0110 02:44:56.323810  204755 system_pods.go:89] "storage-provisioner" [d2832396-2583-470a-a396-7e8bb76186de] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:44:56.323835  204755 retry.go:84] will retry after 300ms: missing components: kube-dns
	I0110 02:44:56.591259  204755 system_pods.go:86] 8 kube-system pods found
	I0110 02:44:56.591297  204755 system_pods.go:89] "coredns-7d764666f9-jwjfn" [858d8d57-f89b-4d9b-8aa5-dbf6572c266d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:44:56.591305  204755 system_pods.go:89] "etcd-embed-certs-290628" [7905e553-9138-4822-9649-ee6f3e2eb58d] Running
	I0110 02:44:56.591311  204755 system_pods.go:89] "kindnet-g87jl" [be4cb760-ae30-4fd4-a122-e8e7f957939c] Running
	I0110 02:44:56.591318  204755 system_pods.go:89] "kube-apiserver-embed-certs-290628" [2e5da56c-c5c4-44b7-a1fd-36b46bd328d3] Running
	I0110 02:44:56.591324  204755 system_pods.go:89] "kube-controller-manager-embed-certs-290628" [186646b5-8524-41e4-9566-6466f756d103] Running
	I0110 02:44:56.591328  204755 system_pods.go:89] "kube-proxy-bdvjd" [af84947c-dfb8-478d-a250-f573f1edd0d7] Running
	I0110 02:44:56.591341  204755 system_pods.go:89] "kube-scheduler-embed-certs-290628" [699bf674-d25c-4ebc-8db6-5d58a627de92] Running
	I0110 02:44:56.591355  204755 system_pods.go:89] "storage-provisioner" [d2832396-2583-470a-a396-7e8bb76186de] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:44:56.866206  204755 system_pods.go:86] 8 kube-system pods found
	I0110 02:44:56.866248  204755 system_pods.go:89] "coredns-7d764666f9-jwjfn" [858d8d57-f89b-4d9b-8aa5-dbf6572c266d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:44:56.866256  204755 system_pods.go:89] "etcd-embed-certs-290628" [7905e553-9138-4822-9649-ee6f3e2eb58d] Running
	I0110 02:44:56.866262  204755 system_pods.go:89] "kindnet-g87jl" [be4cb760-ae30-4fd4-a122-e8e7f957939c] Running
	I0110 02:44:56.866267  204755 system_pods.go:89] "kube-apiserver-embed-certs-290628" [2e5da56c-c5c4-44b7-a1fd-36b46bd328d3] Running
	I0110 02:44:56.866273  204755 system_pods.go:89] "kube-controller-manager-embed-certs-290628" [186646b5-8524-41e4-9566-6466f756d103] Running
	I0110 02:44:56.866279  204755 system_pods.go:89] "kube-proxy-bdvjd" [af84947c-dfb8-478d-a250-f573f1edd0d7] Running
	I0110 02:44:56.866283  204755 system_pods.go:89] "kube-scheduler-embed-certs-290628" [699bf674-d25c-4ebc-8db6-5d58a627de92] Running
	I0110 02:44:56.866294  204755 system_pods.go:89] "storage-provisioner" [d2832396-2583-470a-a396-7e8bb76186de] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:44:57.249283  204755 system_pods.go:86] 8 kube-system pods found
	I0110 02:44:57.249328  204755 system_pods.go:89] "coredns-7d764666f9-jwjfn" [858d8d57-f89b-4d9b-8aa5-dbf6572c266d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:44:57.249335  204755 system_pods.go:89] "etcd-embed-certs-290628" [7905e553-9138-4822-9649-ee6f3e2eb58d] Running
	I0110 02:44:57.249342  204755 system_pods.go:89] "kindnet-g87jl" [be4cb760-ae30-4fd4-a122-e8e7f957939c] Running
	I0110 02:44:57.249347  204755 system_pods.go:89] "kube-apiserver-embed-certs-290628" [2e5da56c-c5c4-44b7-a1fd-36b46bd328d3] Running
	I0110 02:44:57.249352  204755 system_pods.go:89] "kube-controller-manager-embed-certs-290628" [186646b5-8524-41e4-9566-6466f756d103] Running
	I0110 02:44:57.249356  204755 system_pods.go:89] "kube-proxy-bdvjd" [af84947c-dfb8-478d-a250-f573f1edd0d7] Running
	I0110 02:44:57.249361  204755 system_pods.go:89] "kube-scheduler-embed-certs-290628" [699bf674-d25c-4ebc-8db6-5d58a627de92] Running
	I0110 02:44:57.249368  204755 system_pods.go:89] "storage-provisioner" [d2832396-2583-470a-a396-7e8bb76186de] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:44:57.780607  204755 system_pods.go:86] 8 kube-system pods found
	I0110 02:44:57.780639  204755 system_pods.go:89] "coredns-7d764666f9-jwjfn" [858d8d57-f89b-4d9b-8aa5-dbf6572c266d] Running
	I0110 02:44:57.780647  204755 system_pods.go:89] "etcd-embed-certs-290628" [7905e553-9138-4822-9649-ee6f3e2eb58d] Running
	I0110 02:44:57.780652  204755 system_pods.go:89] "kindnet-g87jl" [be4cb760-ae30-4fd4-a122-e8e7f957939c] Running
	I0110 02:44:57.780656  204755 system_pods.go:89] "kube-apiserver-embed-certs-290628" [2e5da56c-c5c4-44b7-a1fd-36b46bd328d3] Running
	I0110 02:44:57.780663  204755 system_pods.go:89] "kube-controller-manager-embed-certs-290628" [186646b5-8524-41e4-9566-6466f756d103] Running
	I0110 02:44:57.780669  204755 system_pods.go:89] "kube-proxy-bdvjd" [af84947c-dfb8-478d-a250-f573f1edd0d7] Running
	I0110 02:44:57.780674  204755 system_pods.go:89] "kube-scheduler-embed-certs-290628" [699bf674-d25c-4ebc-8db6-5d58a627de92] Running
	I0110 02:44:57.780679  204755 system_pods.go:89] "storage-provisioner" [d2832396-2583-470a-a396-7e8bb76186de] Running
	I0110 02:44:57.780692  204755 system_pods.go:126] duration metric: took 1.459519876s to wait for k8s-apps to be running ...
	I0110 02:44:57.780703  204755 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:44:57.780757  204755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:44:57.792954  204755 system_svc.go:56] duration metric: took 12.241872ms WaitForService to wait for kubelet
	I0110 02:44:57.792987  204755 kubeadm.go:587] duration metric: took 14.709870445s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:44:57.793006  204755 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:44:57.796025  204755 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 02:44:57.796057  204755 node_conditions.go:123] node cpu capacity is 2
	I0110 02:44:57.796071  204755 node_conditions.go:105] duration metric: took 3.059174ms to run NodePressure ...
	I0110 02:44:57.796085  204755 start.go:242] waiting for startup goroutines ...
	I0110 02:44:57.796092  204755 start.go:247] waiting for cluster config update ...
	I0110 02:44:57.796105  204755 start.go:256] writing updated cluster config ...
	I0110 02:44:57.796372  204755 ssh_runner.go:195] Run: rm -f paused
	I0110 02:44:57.802592  204755 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:44:57.881057  204755 pod_ready.go:83] waiting for pod "coredns-7d764666f9-jwjfn" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:44:57.885624  204755 pod_ready.go:94] pod "coredns-7d764666f9-jwjfn" is "Ready"
	I0110 02:44:57.885653  204755 pod_ready.go:86] duration metric: took 4.567769ms for pod "coredns-7d764666f9-jwjfn" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:44:57.887748  204755 pod_ready.go:83] waiting for pod "etcd-embed-certs-290628" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:44:57.891585  204755 pod_ready.go:94] pod "etcd-embed-certs-290628" is "Ready"
	I0110 02:44:57.891610  204755 pod_ready.go:86] duration metric: took 3.839206ms for pod "etcd-embed-certs-290628" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:44:57.893677  204755 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-290628" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:44:57.897736  204755 pod_ready.go:94] pod "kube-apiserver-embed-certs-290628" is "Ready"
	I0110 02:44:57.897757  204755 pod_ready.go:86] duration metric: took 4.057522ms for pod "kube-apiserver-embed-certs-290628" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:44:57.899721  204755 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-290628" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:44:58.206941  204755 pod_ready.go:94] pod "kube-controller-manager-embed-certs-290628" is "Ready"
	I0110 02:44:58.206972  204755 pod_ready.go:86] duration metric: took 307.229337ms for pod "kube-controller-manager-embed-certs-290628" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:44:58.406907  204755 pod_ready.go:83] waiting for pod "kube-proxy-bdvjd" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:44:58.806890  204755 pod_ready.go:94] pod "kube-proxy-bdvjd" is "Ready"
	I0110 02:44:58.806921  204755 pod_ready.go:86] duration metric: took 399.987661ms for pod "kube-proxy-bdvjd" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:44:59.007735  204755 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-290628" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:44:59.406546  204755 pod_ready.go:94] pod "kube-scheduler-embed-certs-290628" is "Ready"
	I0110 02:44:59.406576  204755 pod_ready.go:86] duration metric: took 398.803679ms for pod "kube-scheduler-embed-certs-290628" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:44:59.406590  204755 pod_ready.go:40] duration metric: took 1.603966075s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:44:59.472699  204755 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 02:44:59.475918  204755 out.go:203] 
	W0110 02:44:59.478767  204755 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 02:44:59.481639  204755 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:44:59.485396  204755 out.go:179] * Done! kubectl is now configured to use "embed-certs-290628" cluster and "default" namespace by default
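	The kubectl version-skew hint printed above can be followed with minikube's bundled kubectl; a small sketch, assuming the embed-certs-290628 profile from this log:
	  # use a kubectl matching the v1.35.0 control plane instead of the host's 1.33.2
	  minikube -p embed-certs-290628 kubectl -- version
	  minikube -p embed-certs-290628 kubectl -- get pods -A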
	I0110 02:44:56.893462  190834 out.go:252]   - Booting up control plane ...
	I0110 02:44:56.893565  190834 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:44:56.893646  190834 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:44:56.893713  190834 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:44:56.909248  190834 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:44:56.909364  190834 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:44:56.917125  190834 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:44:56.917453  190834 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:44:56.917499  190834 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:44:57.059884  190834 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:44:57.060002  190834 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	
	
	==> CRI-O <==
	Jan 10 02:44:56 embed-certs-290628 crio[836]: time="2026-01-10T02:44:56.712011038Z" level=info msg="Created container 21ed7507bc5ae00c33fca674922069f8cd908b73d7719ee1e88f163aa83ba1b7: kube-system/coredns-7d764666f9-jwjfn/coredns" id=27324e98-4428-4454-958e-035cc3ec2384 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:44:56 embed-certs-290628 crio[836]: time="2026-01-10T02:44:56.712993756Z" level=info msg="Starting container: 21ed7507bc5ae00c33fca674922069f8cd908b73d7719ee1e88f163aa83ba1b7" id=4f278bc1-afb6-4253-83cc-b969edbb65cc name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:44:56 embed-certs-290628 crio[836]: time="2026-01-10T02:44:56.724793359Z" level=info msg="Started container" PID=1790 containerID=21ed7507bc5ae00c33fca674922069f8cd908b73d7719ee1e88f163aa83ba1b7 description=kube-system/coredns-7d764666f9-jwjfn/coredns id=4f278bc1-afb6-4253-83cc-b969edbb65cc name=/runtime.v1.RuntimeService/StartContainer sandboxID=f29304a24a53ba7ca7f865d45d3fa8ccf3bc35b42d0b610d695cb4cb1bc7f5ed
	Jan 10 02:44:59 embed-certs-290628 crio[836]: time="2026-01-10T02:44:59.982833812Z" level=info msg="Running pod sandbox: default/busybox/POD" id=e6b238c7-a742-4345-9c8e-5e37aab1975f name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:44:59 embed-certs-290628 crio[836]: time="2026-01-10T02:44:59.982937326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:44:59 embed-certs-290628 crio[836]: time="2026-01-10T02:44:59.988472107Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:75d350baa75543dad8a8bf98adcc6592f859d5e7c44b80fb1560e48dc822d2c5 UID:5b8f46d9-c5f8-4d7b-a581-b98bf5d92055 NetNS:/var/run/netns/13a2c3ff-4739-4d70-b254-49f466d32615 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400263c188}] Aliases:map[]}"
	Jan 10 02:44:59 embed-certs-290628 crio[836]: time="2026-01-10T02:44:59.988521919Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 10 02:45:00 embed-certs-290628 crio[836]: time="2026-01-10T02:45:00.003175935Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:75d350baa75543dad8a8bf98adcc6592f859d5e7c44b80fb1560e48dc822d2c5 UID:5b8f46d9-c5f8-4d7b-a581-b98bf5d92055 NetNS:/var/run/netns/13a2c3ff-4739-4d70-b254-49f466d32615 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400263c188}] Aliases:map[]}"
	Jan 10 02:45:00 embed-certs-290628 crio[836]: time="2026-01-10T02:45:00.003335857Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 10 02:45:00 embed-certs-290628 crio[836]: time="2026-01-10T02:45:00.00819604Z" level=info msg="Ran pod sandbox 75d350baa75543dad8a8bf98adcc6592f859d5e7c44b80fb1560e48dc822d2c5 with infra container: default/busybox/POD" id=e6b238c7-a742-4345-9c8e-5e37aab1975f name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:45:00 embed-certs-290628 crio[836]: time="2026-01-10T02:45:00.012918093Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fd46c0de-5c58-4019-8106-880f06597751 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:45:00 embed-certs-290628 crio[836]: time="2026-01-10T02:45:00.013302197Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=fd46c0de-5c58-4019-8106-880f06597751 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:45:00 embed-certs-290628 crio[836]: time="2026-01-10T02:45:00.013512694Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=fd46c0de-5c58-4019-8106-880f06597751 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:45:00 embed-certs-290628 crio[836]: time="2026-01-10T02:45:00.019655812Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9ec5f2a5-0298-48ad-b7a9-6645081658ef name=/runtime.v1.ImageService/PullImage
	Jan 10 02:45:00 embed-certs-290628 crio[836]: time="2026-01-10T02:45:00.020352712Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 10 02:45:02 embed-certs-290628 crio[836]: time="2026-01-10T02:45:02.155086202Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=9ec5f2a5-0298-48ad-b7a9-6645081658ef name=/runtime.v1.ImageService/PullImage
	Jan 10 02:45:02 embed-certs-290628 crio[836]: time="2026-01-10T02:45:02.155684881Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=95076cfa-ce9d-4e50-9eb5-19a740ef9628 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:45:02 embed-certs-290628 crio[836]: time="2026-01-10T02:45:02.157877042Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bfcbe778-8ce3-4f6d-9dc6-b5bb973df26c name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:45:02 embed-certs-290628 crio[836]: time="2026-01-10T02:45:02.163287684Z" level=info msg="Creating container: default/busybox/busybox" id=de61ab80-0b54-4c64-9f01-cb1a76e6b9cb name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:45:02 embed-certs-290628 crio[836]: time="2026-01-10T02:45:02.163422236Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:45:02 embed-certs-290628 crio[836]: time="2026-01-10T02:45:02.168239434Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:45:02 embed-certs-290628 crio[836]: time="2026-01-10T02:45:02.16871732Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:45:02 embed-certs-290628 crio[836]: time="2026-01-10T02:45:02.18377176Z" level=info msg="Created container ae16663f19453bbd7b0232c95c222b5ea85ae42ecf2db652b82fce35ea55ee6c: default/busybox/busybox" id=de61ab80-0b54-4c64-9f01-cb1a76e6b9cb name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:45:02 embed-certs-290628 crio[836]: time="2026-01-10T02:45:02.18637968Z" level=info msg="Starting container: ae16663f19453bbd7b0232c95c222b5ea85ae42ecf2db652b82fce35ea55ee6c" id=0aaccc0c-c220-41ae-8605-32a79765685f name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:45:02 embed-certs-290628 crio[836]: time="2026-01-10T02:45:02.189462935Z" level=info msg="Started container" PID=1849 containerID=ae16663f19453bbd7b0232c95c222b5ea85ae42ecf2db652b82fce35ea55ee6c description=default/busybox/busybox id=0aaccc0c-c220-41ae-8605-32a79765685f name=/runtime.v1.RuntimeService/StartContainer sandboxID=75d350baa75543dad8a8bf98adcc6592f859d5e7c44b80fb1560e48dc822d2c5
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	ae16663f19453       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   75d350baa7554       busybox                                      default
	21ed7507bc5ae       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                      13 seconds ago      Running             coredns                   0                   f29304a24a53b       coredns-7d764666f9-jwjfn                     kube-system
	59eebd7a684c6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   3c2ca8b37078a       storage-provisioner                          kube-system
	e5a8bdda9cc2a       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    24 seconds ago      Running             kindnet-cni               0                   99e37b95f8f36       kindnet-g87jl                                kube-system
	00c2fd45d1c8a       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                      27 seconds ago      Running             kube-proxy                0                   2517a9bdf265e       kube-proxy-bdvjd                             kube-system
	765f28ff1251d       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                      38 seconds ago      Running             kube-scheduler            0                   787cdc8474d3f       kube-scheduler-embed-certs-290628            kube-system
	041a75c242622       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                      38 seconds ago      Running             etcd                      0                   030e13beff8c7       etcd-embed-certs-290628                      kube-system
	8a89f2189e2d2       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                      38 seconds ago      Running             kube-apiserver            0                   fd6d47d8a4958       kube-apiserver-embed-certs-290628            kube-system
	65931d9f9994b       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                      38 seconds ago      Running             kube-controller-manager   0                   fb05e05e9bee0       kube-controller-manager-embed-certs-290628   kube-system
	
	
	==> coredns [21ed7507bc5ae00c33fca674922069f8cd908b73d7719ee1e88f163aa83ba1b7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:45069 - 65100 "HINFO IN 8993076596162766037.8259516466458955907. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010787676s
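	Beyond the startup log above, cluster DNS served by this CoreDNS pod can be exercised from the busybox pod created earlier; a sketch, assuming kubectl already points at this cluster:
	  kubectl -n kube-system get pods -l k8s-app=kube-dns
	  kubectl exec busybox -- nslookup kubernetes.default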
	
	
	==> describe nodes <==
	Name:               embed-certs-290628
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-290628
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=embed-certs-290628
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_44_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:44:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-290628
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:45:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:45:08 +0000   Sat, 10 Jan 2026 02:44:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:45:08 +0000   Sat, 10 Jan 2026 02:44:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:45:08 +0000   Sat, 10 Jan 2026 02:44:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:45:08 +0000   Sat, 10 Jan 2026 02:44:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-290628
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                d1493119-98e9-4ef8-b2ce-67a3672d1963
	  Boot ID:                    41f52236-76b9-4dc8-bfe3-121fbdeb9659
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-7d764666f9-jwjfn                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-embed-certs-290628                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-g87jl                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-embed-certs-290628             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-embed-certs-290628    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-bdvjd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-embed-certs-290628             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  29s   node-controller  Node embed-certs-290628 event: Registered Node embed-certs-290628 in Controller
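	The Ready condition and RegisteredNode event shown above can be re-queried directly; a sketch, again assuming kubectl is pointed at the embed-certs-290628 cluster:
	  kubectl get node embed-certs-290628 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	  kubectl get events --field-selector involvedObject.name=embed-certs-290628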
	
	
	==> dmesg <==
	[Jan10 02:09] overlayfs: idmapped layers are currently not supported
	[Jan10 02:10] overlayfs: idmapped layers are currently not supported
	[Jan10 02:14] overlayfs: idmapped layers are currently not supported
	[ +27.765975] overlayfs: idmapped layers are currently not supported
	[Jan10 02:15] overlayfs: idmapped layers are currently not supported
	[Jan10 02:16] overlayfs: idmapped layers are currently not supported
	[Jan10 02:17] overlayfs: idmapped layers are currently not supported
	[Jan10 02:18] overlayfs: idmapped layers are currently not supported
	[ +23.212409] overlayfs: idmapped layers are currently not supported
	[  +9.866169] overlayfs: idmapped layers are currently not supported
	[Jan10 02:19] overlayfs: idmapped layers are currently not supported
	[Jan10 02:20] overlayfs: idmapped layers are currently not supported
	[ +35.960240] overlayfs: idmapped layers are currently not supported
	[Jan10 02:21] overlayfs: idmapped layers are currently not supported
	[Jan10 02:23] overlayfs: idmapped layers are currently not supported
	[Jan10 02:24] overlayfs: idmapped layers are currently not supported
	[Jan10 02:25] overlayfs: idmapped layers are currently not supported
	[Jan10 02:30] overlayfs: idmapped layers are currently not supported
	[Jan10 02:31] overlayfs: idmapped layers are currently not supported
	[Jan10 02:35] overlayfs: idmapped layers are currently not supported
	[Jan10 02:37] overlayfs: idmapped layers are currently not supported
	[Jan10 02:41] overlayfs: idmapped layers are currently not supported
	[ +37.534021] overlayfs: idmapped layers are currently not supported
	[Jan10 02:43] overlayfs: idmapped layers are currently not supported
	[Jan10 02:44] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [041a75c242622c49ee21052e2ee08016ea121647f2bffdc454401785daa6db3e] <==
	{"level":"info","ts":"2026-01-10T02:44:32.228174Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T02:44:32.693870Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-10T02:44:32.694011Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-10T02:44:32.694083Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2026-01-10T02:44:32.694163Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:44:32.694205Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:44:32.696920Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T02:44:32.697054Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:44:32.697107Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2026-01-10T02:44:32.697151Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T02:44:32.700160Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:44:32.700354Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-290628 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:44:32.700440Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:44:32.722307Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:44:32.731567Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:44:32.733982Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T02:44:32.734368Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:44:32.734995Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T02:44:32.744728Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:44:32.774930Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:44:32.774876Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:44:32.775106Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:44:32.782770Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:44:32.782847Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T02:44:32.782907Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	
	
	==> kernel <==
	 02:45:10 up  1:27,  0 user,  load average: 1.60, 1.77, 1.80
	Linux embed-certs-290628 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e5a8bdda9cc2a50f862da949ec9efb98c53efa56ad6682e838ca1fedbf97dd7b] <==
	I0110 02:44:45.626682       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:44:45.627095       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 02:44:45.627221       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:44:45.627238       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:44:45.627252       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:44:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:44:45.828307       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:44:45.828334       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:44:45.828343       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:44:45.828777       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 02:44:46.128618       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:44:46.128643       1 metrics.go:72] Registering metrics
	I0110 02:44:46.128694       1 controller.go:711] "Syncing nftables rules"
	I0110 02:44:55.827885       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:44:55.827938       1 main.go:301] handling current node
	I0110 02:45:05.830655       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:45:05.830696       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8a89f2189e2d25bc3a7fd25db8684cc8e492f8a5f2a779a5f5e6c06bb66d6fb0] <==
	I0110 02:44:34.679853       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0110 02:44:34.679883       1 default_servicecidr_controller.go:169] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I0110 02:44:34.685029       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 02:44:34.689087       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:44:34.695496       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:44:34.695747       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:44:35.473364       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0110 02:44:35.480849       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0110 02:44:35.480872       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:44:36.253281       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:44:36.311075       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:44:36.380221       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0110 02:44:36.387521       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0110 02:44:36.388715       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 02:44:36.393726       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:44:36.589398       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:44:37.413627       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:44:37.430849       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0110 02:44:37.453790       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0110 02:44:42.045917       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:44:42.345296       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0110 02:44:42.345296       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0110 02:44:42.451753       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:44:42.460671       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E0110 02:45:08.816701       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:46064: use of closed network connection
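	The apiserver's health endpoints can also be probed directly through kubectl's raw passthrough; a short sketch against the same cluster:
	  kubectl get --raw /healthz
	  kubectl get --raw '/readyz?verbose'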
	
	
	==> kube-controller-manager [65931d9f9994b01d0e5188631ce02327e20835814334998c7510fd6f1bbb79f3] <==
	I0110 02:44:41.424355       1 shared_informer.go:377] "Caches are synced"
	I0110 02:44:41.424383       1 shared_informer.go:377] "Caches are synced"
	I0110 02:44:41.424391       1 shared_informer.go:377] "Caches are synced"
	I0110 02:44:41.424398       1 shared_informer.go:377] "Caches are synced"
	I0110 02:44:41.424419       1 shared_informer.go:377] "Caches are synced"
	I0110 02:44:41.424433       1 shared_informer.go:377] "Caches are synced"
	I0110 02:44:41.419545       1 shared_informer.go:377] "Caches are synced"
	I0110 02:44:41.419572       1 shared_informer.go:377] "Caches are synced"
	I0110 02:44:41.425018       1 shared_informer.go:377] "Caches are synced"
	I0110 02:44:41.425029       1 shared_informer.go:377] "Caches are synced"
	I0110 02:44:41.425040       1 shared_informer.go:377] "Caches are synced"
	I0110 02:44:41.425055       1 shared_informer.go:377] "Caches are synced"
	I0110 02:44:41.425062       1 shared_informer.go:377] "Caches are synced"
	I0110 02:44:41.425070       1 shared_informer.go:377] "Caches are synced"
	I0110 02:44:41.425084       1 shared_informer.go:377] "Caches are synced"
	I0110 02:44:41.418278       1 shared_informer.go:377] "Caches are synced"
	I0110 02:44:41.425094       1 shared_informer.go:377] "Caches are synced"
	I0110 02:44:41.425103       1 shared_informer.go:377] "Caches are synced"
	I0110 02:44:41.425143       1 shared_informer.go:377] "Caches are synced"
	I0110 02:44:41.444046       1 shared_informer.go:377] "Caches are synced"
	I0110 02:44:41.503024       1 shared_informer.go:377] "Caches are synced"
	I0110 02:44:41.517313       1 shared_informer.go:377] "Caches are synced"
	I0110 02:44:41.517347       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:44:41.517354       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:44:56.449060       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [00c2fd45d1c8af41a397399b14f76402be9a76ff80a704ddb4fd17432f7e876d] <==
	I0110 02:44:42.825053       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:44:42.929447       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:44:43.030160       1 shared_informer.go:377] "Caches are synced"
	I0110 02:44:43.030208       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 02:44:43.030307       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:44:43.059200       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:44:43.059314       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:44:43.063699       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:44:43.064069       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:44:43.064256       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:44:43.065475       1 config.go:200] "Starting service config controller"
	I0110 02:44:43.065669       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:44:43.065785       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:44:43.065820       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:44:43.065860       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:44:43.068802       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:44:43.069583       1 config.go:309] "Starting node config controller"
	I0110 02:44:43.069630       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:44:43.069660       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:44:43.166573       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 02:44:43.166616       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 02:44:43.169142       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [765f28ff1251dc39b4aa8b31f6d3c76d2aadea32b423aa44b8760cca37779305] <==
	E0110 02:44:34.665764       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 02:44:34.665833       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 02:44:34.665781       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 02:44:34.666124       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 02:44:34.666330       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 02:44:34.666070       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 02:44:34.666975       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 02:44:34.669427       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 02:44:34.670023       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 02:44:34.670061       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 02:44:35.502430       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 02:44:35.504917       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 02:44:35.523895       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 02:44:35.569403       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 02:44:35.569837       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 02:44:35.617425       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E0110 02:44:35.643986       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 02:44:35.687744       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 02:44:35.728838       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 02:44:35.796495       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 02:44:35.809056       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 02:44:35.875844       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 02:44:35.895929       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 02:44:35.954223       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	I0110 02:44:37.944957       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:44:42 embed-certs-290628 kubelet[1303]: I0110 02:44:42.609067    1303 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Jan 10 02:44:42 embed-certs-290628 kubelet[1303]: E0110 02:44:42.625253    1303 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-290628" containerName="kube-scheduler"
	Jan 10 02:44:42 embed-certs-290628 kubelet[1303]: W0110 02:44:42.702843    1303 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/23cbe0d69bf1f36f3e571133519e268b4644b805d56bd2cad7573cc2c5daf4e8/crio-99e37b95f8f36673b15da62b46e671d993326b02707084d8f50dda83ece60951 WatchSource:0}: Error finding container 99e37b95f8f36673b15da62b46e671d993326b02707084d8f50dda83ece60951: Status 404 returned error can't find the container with id 99e37b95f8f36673b15da62b46e671d993326b02707084d8f50dda83ece60951
	Jan 10 02:44:43 embed-certs-290628 kubelet[1303]: I0110 02:44:43.561515    1303 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-bdvjd" podStartSLOduration=1.5614966300000002 podStartE2EDuration="1.56149663s" podCreationTimestamp="2026-01-10 02:44:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:44:43.558283319 +0000 UTC m=+6.256116262" watchObservedRunningTime="2026-01-10 02:44:43.56149663 +0000 UTC m=+6.259329564"
	Jan 10 02:44:44 embed-certs-290628 kubelet[1303]: E0110 02:44:44.097541    1303 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-290628" containerName="kube-controller-manager"
	Jan 10 02:44:47 embed-certs-290628 kubelet[1303]: E0110 02:44:47.961938    1303 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-290628" containerName="kube-apiserver"
	Jan 10 02:44:47 embed-certs-290628 kubelet[1303]: I0110 02:44:47.980939    1303 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-g87jl" podStartSLOduration=3.255717414 podStartE2EDuration="5.98092449s" podCreationTimestamp="2026-01-10 02:44:42 +0000 UTC" firstStartedPulling="2026-01-10 02:44:42.712006932 +0000 UTC m=+5.409839866" lastFinishedPulling="2026-01-10 02:44:45.437213983 +0000 UTC m=+8.135046942" observedRunningTime="2026-01-10 02:44:45.578840697 +0000 UTC m=+8.276673632" watchObservedRunningTime="2026-01-10 02:44:47.98092449 +0000 UTC m=+10.678757425"
	Jan 10 02:44:48 embed-certs-290628 kubelet[1303]: E0110 02:44:48.158568    1303 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-290628" containerName="etcd"
	Jan 10 02:44:48 embed-certs-290628 kubelet[1303]: E0110 02:44:48.560730    1303 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-290628" containerName="kube-apiserver"
	Jan 10 02:44:48 embed-certs-290628 kubelet[1303]: E0110 02:44:48.561023    1303 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-290628" containerName="etcd"
	Jan 10 02:44:52 embed-certs-290628 kubelet[1303]: E0110 02:44:52.633563    1303 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-290628" containerName="kube-scheduler"
	Jan 10 02:44:54 embed-certs-290628 kubelet[1303]: E0110 02:44:54.105938    1303 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-290628" containerName="kube-controller-manager"
	Jan 10 02:44:56 embed-certs-290628 kubelet[1303]: I0110 02:44:56.180238    1303 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Jan 10 02:44:56 embed-certs-290628 kubelet[1303]: I0110 02:44:56.304120    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/858d8d57-f89b-4d9b-8aa5-dbf6572c266d-config-volume\") pod \"coredns-7d764666f9-jwjfn\" (UID: \"858d8d57-f89b-4d9b-8aa5-dbf6572c266d\") " pod="kube-system/coredns-7d764666f9-jwjfn"
	Jan 10 02:44:56 embed-certs-290628 kubelet[1303]: I0110 02:44:56.304184    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5blt\" (UniqueName: \"kubernetes.io/projected/858d8d57-f89b-4d9b-8aa5-dbf6572c266d-kube-api-access-k5blt\") pod \"coredns-7d764666f9-jwjfn\" (UID: \"858d8d57-f89b-4d9b-8aa5-dbf6572c266d\") " pod="kube-system/coredns-7d764666f9-jwjfn"
	Jan 10 02:44:56 embed-certs-290628 kubelet[1303]: I0110 02:44:56.304226    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d2832396-2583-470a-a396-7e8bb76186de-tmp\") pod \"storage-provisioner\" (UID: \"d2832396-2583-470a-a396-7e8bb76186de\") " pod="kube-system/storage-provisioner"
	Jan 10 02:44:56 embed-certs-290628 kubelet[1303]: I0110 02:44:56.304249    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fclrl\" (UniqueName: \"kubernetes.io/projected/d2832396-2583-470a-a396-7e8bb76186de-kube-api-access-fclrl\") pod \"storage-provisioner\" (UID: \"d2832396-2583-470a-a396-7e8bb76186de\") " pod="kube-system/storage-provisioner"
	Jan 10 02:44:56 embed-certs-290628 kubelet[1303]: W0110 02:44:56.576406    1303 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/23cbe0d69bf1f36f3e571133519e268b4644b805d56bd2cad7573cc2c5daf4e8/crio-3c2ca8b37078a0d647baccffcd056214d06b874bac3d37d35f013566c7551b0a WatchSource:0}: Error finding container 3c2ca8b37078a0d647baccffcd056214d06b874bac3d37d35f013566c7551b0a: Status 404 returned error can't find the container with id 3c2ca8b37078a0d647baccffcd056214d06b874bac3d37d35f013566c7551b0a
	Jan 10 02:44:57 embed-certs-290628 kubelet[1303]: E0110 02:44:57.584037    1303 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-jwjfn" containerName="coredns"
	Jan 10 02:44:57 embed-certs-290628 kubelet[1303]: I0110 02:44:57.595531    1303 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.595516468 podStartE2EDuration="13.595516468s" podCreationTimestamp="2026-01-10 02:44:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:44:57.594797899 +0000 UTC m=+20.292630858" watchObservedRunningTime="2026-01-10 02:44:57.595516468 +0000 UTC m=+20.293349419"
	Jan 10 02:44:58 embed-certs-290628 kubelet[1303]: E0110 02:44:58.585772    1303 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-jwjfn" containerName="coredns"
	Jan 10 02:44:59 embed-certs-290628 kubelet[1303]: E0110 02:44:59.587530    1303 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-jwjfn" containerName="coredns"
	Jan 10 02:44:59 embed-certs-290628 kubelet[1303]: I0110 02:44:59.673078    1303 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-jwjfn" podStartSLOduration=17.673062299 podStartE2EDuration="17.673062299s" podCreationTimestamp="2026-01-10 02:44:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:44:57.610096639 +0000 UTC m=+20.307929582" watchObservedRunningTime="2026-01-10 02:44:59.673062299 +0000 UTC m=+22.370895251"
	Jan 10 02:44:59 embed-certs-290628 kubelet[1303]: I0110 02:44:59.726480    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvqnf\" (UniqueName: \"kubernetes.io/projected/5b8f46d9-c5f8-4d7b-a581-b98bf5d92055-kube-api-access-dvqnf\") pod \"busybox\" (UID: \"5b8f46d9-c5f8-4d7b-a581-b98bf5d92055\") " pod="default/busybox"
	Jan 10 02:45:00 embed-certs-290628 kubelet[1303]: W0110 02:45:00.008106    1303 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/23cbe0d69bf1f36f3e571133519e268b4644b805d56bd2cad7573cc2c5daf4e8/crio-75d350baa75543dad8a8bf98adcc6592f859d5e7c44b80fb1560e48dc822d2c5 WatchSource:0}: Error finding container 75d350baa75543dad8a8bf98adcc6592f859d5e7c44b80fb1560e48dc822d2c5: Status 404 returned error can't find the container with id 75d350baa75543dad8a8bf98adcc6592f859d5e7c44b80fb1560e48dc822d2c5
	
	
	==> storage-provisioner [59eebd7a684c6ab62aac03d07df90a237a8ddf0301c24191203103f5f516c1b6] <==
	I0110 02:44:56.681541       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 02:44:56.722354       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 02:44:56.722481       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 02:44:56.728246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:44:56.843238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:44:56.843477       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 02:44:56.843667       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-290628_368950e3-80e6-4f43-8a1b-0e8ec85f1c04!
	I0110 02:44:56.845817       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"07064d36-5187-4459-b216-f8310ec76f12", APIVersion:"v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-290628_368950e3-80e6-4f43-8a1b-0e8ec85f1c04 became leader
	W0110 02:44:56.851656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:44:56.861918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:44:56.945955       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-290628_368950e3-80e6-4f43-8a1b-0e8ec85f1c04!
	W0110 02:44:58.865258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:44:58.871658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:45:00.875079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:45:00.883813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:45:02.886823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:45:02.893749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:45:04.896474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:45:04.900263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:45:06.903758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:45:06.910990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:45:08.913732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:45:08.922402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-290628 -n embed-certs-290628
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-290628 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.36s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (5.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-290628 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-290628 --alsologtostderr -v=1: exit status 80 (1.848385374s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-290628 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:46:29.726067  211486 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:46:29.726283  211486 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:46:29.726310  211486 out.go:374] Setting ErrFile to fd 2...
	I0110 02:46:29.726329  211486 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:46:29.726625  211486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:46:29.726903  211486 out.go:368] Setting JSON to false
	I0110 02:46:29.726945  211486 mustload.go:66] Loading cluster: embed-certs-290628
	I0110 02:46:29.727374  211486 config.go:182] Loaded profile config "embed-certs-290628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:46:29.727911  211486 cli_runner.go:164] Run: docker container inspect embed-certs-290628 --format={{.State.Status}}
	I0110 02:46:29.748520  211486 host.go:66] Checking if "embed-certs-290628" exists ...
	I0110 02:46:29.748846  211486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:46:29.810240  211486 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2026-01-10 02:46:29.80064245 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:46:29.810879  211486 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22414/minikube-v1.37.0-1767924026-22414-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767924026-22414/minikube-v1.37.0-1767924026-22414-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767924026-22414-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:embed-certs-290628 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0110 02:46:29.816143  211486 out.go:179] * Pausing node embed-certs-290628 ... 
	I0110 02:46:29.818895  211486 host.go:66] Checking if "embed-certs-290628" exists ...
	I0110 02:46:29.819227  211486 ssh_runner.go:195] Run: systemctl --version
	I0110 02:46:29.819287  211486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:46:29.835306  211486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa Username:docker}
	I0110 02:46:29.941618  211486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:46:29.963429  211486 pause.go:52] kubelet running: true
	I0110 02:46:29.963519  211486 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:46:30.219240  211486 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:46:30.219334  211486 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:46:30.286065  211486 cri.go:96] found id: "0aae3cd379ca33876ea4a9b0c32378c10261c80e2ab472d4eee964f838bb8955"
	I0110 02:46:30.286088  211486 cri.go:96] found id: "5ad18c1213ce1ba79c8b25d06d31b054e5c6d7d41fb47e3deaf5b50002f70222"
	I0110 02:46:30.286093  211486 cri.go:96] found id: "b14c2926b4e747d67085d8e65c5115d40f6a48d7c026c413f61aeaca8a99c96e"
	I0110 02:46:30.286097  211486 cri.go:96] found id: "4e7d5f61e1851fb6bf16a2740f2dac2f735df2977fb762c6d77ec2fd39e8aa7b"
	I0110 02:46:30.286101  211486 cri.go:96] found id: "cf97d9a5c9164ffd276eb59e2c2d5a25f4e245b1724464838763e7794e90f36e"
	I0110 02:46:30.286104  211486 cri.go:96] found id: "35fda1be4302381d9f829358ffbd2635d0f9ef32fc82cec42ebc5bb2174d6351"
	I0110 02:46:30.286107  211486 cri.go:96] found id: "ddcfa0a7f69364becc7ff3f5e5d934a47a126035e79baf2f684410b0a13244b8"
	I0110 02:46:30.286111  211486 cri.go:96] found id: "5205ca6cd4481f5c3b3669c60cae1e88fadbf6a0dbed071dfc5ed887d68c1eb6"
	I0110 02:46:30.286114  211486 cri.go:96] found id: "95e8806653f7a62ea44d48caf19c96fe7f8f2f3713d980b9169516d588775029"
	I0110 02:46:30.286123  211486 cri.go:96] found id: "e2653d95359074ec9974d0190d972ff3ef51db7c891e922f11b26f760b02f6e3"
	I0110 02:46:30.286127  211486 cri.go:96] found id: "54cdd96263e164280da6cc533b71f17c8723cb4d97eb22286bfa92df9daa37aa"
	I0110 02:46:30.286130  211486 cri.go:96] found id: ""
	I0110 02:46:30.286181  211486 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:46:30.296825  211486 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:46:30Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:46:30.483213  211486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:46:30.496268  211486 pause.go:52] kubelet running: false
	I0110 02:46:30.496332  211486 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:46:30.666002  211486 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:46:30.666111  211486 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:46:30.730883  211486 cri.go:96] found id: "0aae3cd379ca33876ea4a9b0c32378c10261c80e2ab472d4eee964f838bb8955"
	I0110 02:46:30.730904  211486 cri.go:96] found id: "5ad18c1213ce1ba79c8b25d06d31b054e5c6d7d41fb47e3deaf5b50002f70222"
	I0110 02:46:30.730909  211486 cri.go:96] found id: "b14c2926b4e747d67085d8e65c5115d40f6a48d7c026c413f61aeaca8a99c96e"
	I0110 02:46:30.730912  211486 cri.go:96] found id: "4e7d5f61e1851fb6bf16a2740f2dac2f735df2977fb762c6d77ec2fd39e8aa7b"
	I0110 02:46:30.730920  211486 cri.go:96] found id: "cf97d9a5c9164ffd276eb59e2c2d5a25f4e245b1724464838763e7794e90f36e"
	I0110 02:46:30.730925  211486 cri.go:96] found id: "35fda1be4302381d9f829358ffbd2635d0f9ef32fc82cec42ebc5bb2174d6351"
	I0110 02:46:30.730928  211486 cri.go:96] found id: "ddcfa0a7f69364becc7ff3f5e5d934a47a126035e79baf2f684410b0a13244b8"
	I0110 02:46:30.730932  211486 cri.go:96] found id: "5205ca6cd4481f5c3b3669c60cae1e88fadbf6a0dbed071dfc5ed887d68c1eb6"
	I0110 02:46:30.730935  211486 cri.go:96] found id: "95e8806653f7a62ea44d48caf19c96fe7f8f2f3713d980b9169516d588775029"
	I0110 02:46:30.730941  211486 cri.go:96] found id: "e2653d95359074ec9974d0190d972ff3ef51db7c891e922f11b26f760b02f6e3"
	I0110 02:46:30.730945  211486 cri.go:96] found id: "54cdd96263e164280da6cc533b71f17c8723cb4d97eb22286bfa92df9daa37aa"
	I0110 02:46:30.730948  211486 cri.go:96] found id: ""
	I0110 02:46:30.730999  211486 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:46:31.223887  211486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:46:31.236898  211486 pause.go:52] kubelet running: false
	I0110 02:46:31.236983  211486 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:46:31.409582  211486 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:46:31.409728  211486 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:46:31.480901  211486 cri.go:96] found id: "0aae3cd379ca33876ea4a9b0c32378c10261c80e2ab472d4eee964f838bb8955"
	I0110 02:46:31.480924  211486 cri.go:96] found id: "5ad18c1213ce1ba79c8b25d06d31b054e5c6d7d41fb47e3deaf5b50002f70222"
	I0110 02:46:31.480929  211486 cri.go:96] found id: "b14c2926b4e747d67085d8e65c5115d40f6a48d7c026c413f61aeaca8a99c96e"
	I0110 02:46:31.480933  211486 cri.go:96] found id: "4e7d5f61e1851fb6bf16a2740f2dac2f735df2977fb762c6d77ec2fd39e8aa7b"
	I0110 02:46:31.480936  211486 cri.go:96] found id: "cf97d9a5c9164ffd276eb59e2c2d5a25f4e245b1724464838763e7794e90f36e"
	I0110 02:46:31.480940  211486 cri.go:96] found id: "35fda1be4302381d9f829358ffbd2635d0f9ef32fc82cec42ebc5bb2174d6351"
	I0110 02:46:31.480943  211486 cri.go:96] found id: "ddcfa0a7f69364becc7ff3f5e5d934a47a126035e79baf2f684410b0a13244b8"
	I0110 02:46:31.480953  211486 cri.go:96] found id: "5205ca6cd4481f5c3b3669c60cae1e88fadbf6a0dbed071dfc5ed887d68c1eb6"
	I0110 02:46:31.480957  211486 cri.go:96] found id: "95e8806653f7a62ea44d48caf19c96fe7f8f2f3713d980b9169516d588775029"
	I0110 02:46:31.480963  211486 cri.go:96] found id: "e2653d95359074ec9974d0190d972ff3ef51db7c891e922f11b26f760b02f6e3"
	I0110 02:46:31.480966  211486 cri.go:96] found id: "54cdd96263e164280da6cc533b71f17c8723cb4d97eb22286bfa92df9daa37aa"
	I0110 02:46:31.480969  211486 cri.go:96] found id: ""
	I0110 02:46:31.481016  211486 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:46:31.495385  211486 out.go:203] 
	W0110 02:46:31.498277  211486 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:46:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:46:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 02:46:31.498301  211486 out.go:285] * 
	* 
	W0110 02:46:31.501056  211486 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 02:46:31.503869  211486 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-290628 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-290628
helpers_test.go:244: (dbg) docker inspect embed-certs-290628:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "23cbe0d69bf1f36f3e571133519e268b4644b805d56bd2cad7573cc2c5daf4e8",
	        "Created": "2026-01-10T02:44:15.973564072Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 208836,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:45:23.645458131Z",
	            "FinishedAt": "2026-01-10T02:45:22.854933331Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/23cbe0d69bf1f36f3e571133519e268b4644b805d56bd2cad7573cc2c5daf4e8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23cbe0d69bf1f36f3e571133519e268b4644b805d56bd2cad7573cc2c5daf4e8/hostname",
	        "HostsPath": "/var/lib/docker/containers/23cbe0d69bf1f36f3e571133519e268b4644b805d56bd2cad7573cc2c5daf4e8/hosts",
	        "LogPath": "/var/lib/docker/containers/23cbe0d69bf1f36f3e571133519e268b4644b805d56bd2cad7573cc2c5daf4e8/23cbe0d69bf1f36f3e571133519e268b4644b805d56bd2cad7573cc2c5daf4e8-json.log",
	        "Name": "/embed-certs-290628",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-290628:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-290628",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "23cbe0d69bf1f36f3e571133519e268b4644b805d56bd2cad7573cc2c5daf4e8",
	                "LowerDir": "/var/lib/docker/overlay2/ed28b0ecc2316eb191e9de699f8d778d307925e07bfe86b1703fc76d49ce1266-init/diff:/var/lib/docker/overlay2/2b235871be32ebd3e55c0a490bf5b289824c6be94b4d2763f5bca3b814af3fd1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ed28b0ecc2316eb191e9de699f8d778d307925e07bfe86b1703fc76d49ce1266/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ed28b0ecc2316eb191e9de699f8d778d307925e07bfe86b1703fc76d49ce1266/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ed28b0ecc2316eb191e9de699f8d778d307925e07bfe86b1703fc76d49ce1266/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-290628",
	                "Source": "/var/lib/docker/volumes/embed-certs-290628/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-290628",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-290628",
	                "name.minikube.sigs.k8s.io": "embed-certs-290628",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ea693768ce2a0097f8409c3b07da187b84cbfe2e126cf690d52ecf42dc7ea9d0",
	            "SandboxKey": "/var/run/docker/netns/ea693768ce2a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-290628": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:b8:e0:54:27:1f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ce2734d3261bfe1e9933e2600de541689c00d14ccf15e6611f60408ce1c3af3",
	                    "EndpointID": "fc15b62ebdf59e6ba1b7606af080b3020521f26512c3518130f7c26afff767e6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-290628",
	                        "23cbe0d69bf1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-290628 -n embed-certs-290628
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-290628 -n embed-certs-290628: exit status 2 (329.707289ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-290628 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-290628 logs -n 25: (1.255218704s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ start   │ -p cert-expiration-213257 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-213257    │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │ 10 Jan 26 02:37 UTC │
	│ start   │ -p cert-expiration-213257 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-213257    │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:40 UTC │
	│ delete  │ -p cert-expiration-213257                                                                                                                                                                                                                     │ cert-expiration-213257    │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:40 UTC │
	│ start   │ -p force-systemd-flag-038359 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-038359 │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │                     │
	│ delete  │ -p force-systemd-env-088457                                                                                                                                                                                                                   │ force-systemd-env-088457  │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:40 UTC │
	│ start   │ -p cert-options-295914 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-295914       │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:41 UTC │
	│ ssh     │ cert-options-295914 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-295914       │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:41 UTC │
	│ ssh     │ -p cert-options-295914 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-295914       │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:41 UTC │
	│ delete  │ -p cert-options-295914                                                                                                                                                                                                                        │ cert-options-295914       │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:41 UTC │
	│ start   │ -p old-k8s-version-736081 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:42 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-736081 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │                     │
	│ stop    │ -p old-k8s-version-736081 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-736081 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:42 UTC │
	│ start   │ -p old-k8s-version-736081 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:43 UTC │
	│ image   │ old-k8s-version-736081 image list --format=json                                                                                                                                                                                               │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ pause   │ -p old-k8s-version-736081 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │                     │
	│ delete  │ -p old-k8s-version-736081                                                                                                                                                                                                                     │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ delete  │ -p old-k8s-version-736081                                                                                                                                                                                                                     │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ start   │ -p embed-certs-290628 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-290628        │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ addons  │ enable metrics-server -p embed-certs-290628 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-290628        │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │                     │
	│ stop    │ -p embed-certs-290628 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-290628        │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │ 10 Jan 26 02:45 UTC │
	│ addons  │ enable dashboard -p embed-certs-290628 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-290628        │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │ 10 Jan 26 02:45 UTC │
	│ start   │ -p embed-certs-290628 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-290628        │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │ 10 Jan 26 02:46 UTC │
	│ image   │ embed-certs-290628 image list --format=json                                                                                                                                                                                                   │ embed-certs-290628        │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ pause   │ -p embed-certs-290628 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-290628        │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:45:23
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:45:23.381410  208704 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:45:23.381597  208704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:45:23.381624  208704 out.go:374] Setting ErrFile to fd 2...
	I0110 02:45:23.381648  208704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:45:23.381952  208704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:45:23.382417  208704 out.go:368] Setting JSON to false
	I0110 02:45:23.383287  208704 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5273,"bootTime":1768007851,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 02:45:23.383380  208704 start.go:143] virtualization:  
	I0110 02:45:23.386669  208704 out.go:179] * [embed-certs-290628] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:45:23.390561  208704 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:45:23.390641  208704 notify.go:221] Checking for updates...
	I0110 02:45:23.394415  208704 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:45:23.397350  208704 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:45:23.400205  208704 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 02:45:23.403114  208704 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:45:23.406025  208704 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:45:23.409416  208704 config.go:182] Loaded profile config "embed-certs-290628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:45:23.410022  208704 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:45:23.434456  208704 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:45:23.434555  208704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:45:23.497452  208704 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:45:23.488538143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:45:23.497573  208704 docker.go:319] overlay module found
	I0110 02:45:23.500815  208704 out.go:179] * Using the docker driver based on existing profile
	I0110 02:45:23.503661  208704 start.go:309] selected driver: docker
	I0110 02:45:23.503689  208704 start.go:928] validating driver "docker" against &{Name:embed-certs-290628 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-290628 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:45:23.503788  208704 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:45:23.504486  208704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:45:23.561052  208704 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:45:23.552226285 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:45:23.561369  208704 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:45:23.561404  208704 cni.go:84] Creating CNI manager for ""
	I0110 02:45:23.561460  208704 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:45:23.561506  208704 start.go:353] cluster config:
	{Name:embed-certs-290628 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-290628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:45:23.564598  208704 out.go:179] * Starting "embed-certs-290628" primary control-plane node in "embed-certs-290628" cluster
	I0110 02:45:23.567211  208704 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:45:23.569972  208704 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:45:23.572715  208704 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:45:23.572758  208704 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 02:45:23.572769  208704 cache.go:65] Caching tarball of preloaded images
	I0110 02:45:23.572799  208704 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:45:23.572855  208704 preload.go:251] Found /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 02:45:23.572866  208704 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:45:23.572973  208704 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/config.json ...
	I0110 02:45:23.591903  208704 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:45:23.591926  208704 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:45:23.591947  208704 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:45:23.591976  208704 start.go:360] acquireMachinesLock for embed-certs-290628: {Name:mkecc1830917e603b9fb1bffd9b396deb689a507 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:45:23.592033  208704 start.go:364] duration metric: took 36.586µs to acquireMachinesLock for "embed-certs-290628"
	I0110 02:45:23.592058  208704 start.go:96] Skipping create...Using existing machine configuration
	I0110 02:45:23.592066  208704 fix.go:54] fixHost starting: 
	I0110 02:45:23.592320  208704 cli_runner.go:164] Run: docker container inspect embed-certs-290628 --format={{.State.Status}}
	I0110 02:45:23.608911  208704 fix.go:112] recreateIfNeeded on embed-certs-290628: state=Stopped err=<nil>
	W0110 02:45:23.608941  208704 fix.go:138] unexpected machine state, will restart: <nil>
	I0110 02:45:23.612063  208704 out.go:252] * Restarting existing docker container for "embed-certs-290628" ...
	I0110 02:45:23.612143  208704 cli_runner.go:164] Run: docker start embed-certs-290628
	I0110 02:45:23.878801  208704 cli_runner.go:164] Run: docker container inspect embed-certs-290628 --format={{.State.Status}}
	I0110 02:45:23.904662  208704 kic.go:430] container "embed-certs-290628" state is running.
	I0110 02:45:23.905026  208704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-290628
	I0110 02:45:23.927476  208704 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/config.json ...
	I0110 02:45:23.927692  208704 machine.go:94] provisionDockerMachine start ...
	I0110 02:45:23.927762  208704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:45:23.946762  208704 main.go:144] libmachine: Using SSH client type: native
	I0110 02:45:23.947209  208704 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0110 02:45:23.947284  208704 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:45:23.949153  208704 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 02:45:27.095193  208704 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-290628
	
	I0110 02:45:27.095219  208704 ubuntu.go:182] provisioning hostname "embed-certs-290628"
	I0110 02:45:27.095285  208704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:45:27.113447  208704 main.go:144] libmachine: Using SSH client type: native
	I0110 02:45:27.113748  208704 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0110 02:45:27.113765  208704 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-290628 && echo "embed-certs-290628" | sudo tee /etc/hostname
	I0110 02:45:27.267775  208704 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-290628
	
	I0110 02:45:27.267891  208704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:45:27.285865  208704 main.go:144] libmachine: Using SSH client type: native
	I0110 02:45:27.286183  208704 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0110 02:45:27.286207  208704 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-290628' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-290628/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-290628' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:45:27.431932  208704 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:45:27.431962  208704 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2353/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2353/.minikube}
	I0110 02:45:27.431980  208704 ubuntu.go:190] setting up certificates
	I0110 02:45:27.431989  208704 provision.go:84] configureAuth start
	I0110 02:45:27.432048  208704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-290628
	I0110 02:45:27.453438  208704 provision.go:143] copyHostCerts
	I0110 02:45:27.453508  208704 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem, removing ...
	I0110 02:45:27.453516  208704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem
	I0110 02:45:27.453591  208704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem (1123 bytes)
	I0110 02:45:27.453690  208704 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem, removing ...
	I0110 02:45:27.453696  208704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem
	I0110 02:45:27.453720  208704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem (1679 bytes)
	I0110 02:45:27.453771  208704 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem, removing ...
	I0110 02:45:27.453775  208704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem
	I0110 02:45:27.453798  208704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem (1082 bytes)
	I0110 02:45:27.453841  208704 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem org=jenkins.embed-certs-290628 san=[127.0.0.1 192.168.76.2 embed-certs-290628 localhost minikube]
	I0110 02:45:27.510022  208704 provision.go:177] copyRemoteCerts
	I0110 02:45:27.510086  208704 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:45:27.510166  208704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:45:27.529229  208704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa Username:docker}
	I0110 02:45:27.636046  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:45:27.652825  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0110 02:45:27.669764  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:45:27.686693  208704 provision.go:87] duration metric: took 254.684325ms to configureAuth
	I0110 02:45:27.686720  208704 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:45:27.686911  208704 config.go:182] Loaded profile config "embed-certs-290628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:45:27.687060  208704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:45:27.704250  208704 main.go:144] libmachine: Using SSH client type: native
	I0110 02:45:27.704562  208704 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0110 02:45:27.704583  208704 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:45:28.054268  208704 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:45:28.054339  208704 machine.go:97] duration metric: took 4.126638028s to provisionDockerMachine
	I0110 02:45:28.054367  208704 start.go:293] postStartSetup for "embed-certs-290628" (driver="docker")
	I0110 02:45:28.054410  208704 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:45:28.054498  208704 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:45:28.054572  208704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:45:28.077456  208704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa Username:docker}
	I0110 02:45:28.186248  208704 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:45:28.190232  208704 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:45:28.190255  208704 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:45:28.190266  208704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/addons for local assets ...
	I0110 02:45:28.190318  208704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/files for local assets ...
	I0110 02:45:28.190389  208704 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> 41682.pem in /etc/ssl/certs
	I0110 02:45:28.190491  208704 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:45:28.200225  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:45:28.226621  208704 start.go:296] duration metric: took 172.211907ms for postStartSetup
	I0110 02:45:28.226714  208704 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:45:28.226781  208704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:45:28.246593  208704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa Username:docker}
	I0110 02:45:28.349043  208704 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:45:28.353809  208704 fix.go:56] duration metric: took 4.761736828s for fixHost
	I0110 02:45:28.353877  208704 start.go:83] releasing machines lock for "embed-certs-290628", held for 4.761829896s
	I0110 02:45:28.353953  208704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-290628
	I0110 02:45:28.370877  208704 ssh_runner.go:195] Run: cat /version.json
	I0110 02:45:28.370932  208704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:45:28.371194  208704 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:45:28.371246  208704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:45:28.395815  208704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa Username:docker}
	I0110 02:45:28.398432  208704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa Username:docker}
	I0110 02:45:28.605853  208704 ssh_runner.go:195] Run: systemctl --version
	I0110 02:45:28.612103  208704 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:45:28.644933  208704 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:45:28.649084  208704 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:45:28.649158  208704 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:45:28.656627  208704 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 02:45:28.656649  208704 start.go:496] detecting cgroup driver to use...
	I0110 02:45:28.656706  208704 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 02:45:28.656766  208704 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:45:28.671582  208704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:45:28.684400  208704 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:45:28.684462  208704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:45:28.699538  208704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:45:28.712172  208704 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:45:28.829218  208704 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:45:28.947730  208704 docker.go:234] disabling docker service ...
	I0110 02:45:28.947860  208704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:45:28.962478  208704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:45:28.977737  208704 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:45:29.091632  208704 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:45:29.208134  208704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:45:29.220570  208704 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:45:29.234330  208704 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:45:29.234395  208704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:45:29.242754  208704 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 02:45:29.242825  208704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:45:29.251692  208704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:45:29.260571  208704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:45:29.269216  208704 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:45:29.277572  208704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:45:29.286412  208704 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:45:29.295240  208704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:45:29.303834  208704 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:45:29.311061  208704 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:45:29.318373  208704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:45:29.426682  208704 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 02:45:29.586651  208704 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:45:29.586730  208704 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:45:29.590506  208704 start.go:574] Will wait 60s for crictl version
	I0110 02:45:29.590613  208704 ssh_runner.go:195] Run: which crictl
	I0110 02:45:29.594010  208704 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:45:29.618465  208704 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:45:29.618622  208704 ssh_runner.go:195] Run: crio --version
	I0110 02:45:29.644106  208704 ssh_runner.go:195] Run: crio --version
	I0110 02:45:29.677232  208704 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:45:29.680124  208704 cli_runner.go:164] Run: docker network inspect embed-certs-290628 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:45:29.698469  208704 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 02:45:29.702667  208704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:45:29.711719  208704 kubeadm.go:884] updating cluster {Name:embed-certs-290628 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-290628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:45:29.711933  208704 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:45:29.711986  208704 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:45:29.749560  208704 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:45:29.749582  208704 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:45:29.749641  208704 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:45:29.775093  208704 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:45:29.775115  208704 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:45:29.775124  208704 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 02:45:29.775256  208704 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-290628 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-290628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:45:29.775336  208704 ssh_runner.go:195] Run: crio config
	I0110 02:45:29.843086  208704 cni.go:84] Creating CNI manager for ""
	I0110 02:45:29.843153  208704 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:45:29.843203  208704 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:45:29.843244  208704 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-290628 NodeName:embed-certs-290628 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:45:29.843453  208704 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-290628"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:45:29.843571  208704 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:45:29.851951  208704 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:45:29.852059  208704 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:45:29.860227  208704 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0110 02:45:29.872996  208704 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:45:29.884962  208704 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
	I0110 02:45:29.896753  208704 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:45:29.900204  208704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:45:29.909553  208704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:45:30.033320  208704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:45:30.051377  208704 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628 for IP: 192.168.76.2
	I0110 02:45:30.051397  208704 certs.go:195] generating shared ca certs ...
	I0110 02:45:30.051414  208704 certs.go:227] acquiring lock for ca certs: {Name:mk60bb8578732e3e8efcf11c7d77fb48585828c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:45:30.051570  208704 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key
	I0110 02:45:30.051612  208704 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key
	I0110 02:45:30.051620  208704 certs.go:257] generating profile certs ...
	I0110 02:45:30.051711  208704 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/client.key
	I0110 02:45:30.051785  208704 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/apiserver.key.4427bfdd
	I0110 02:45:30.051867  208704 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/proxy-client.key
	I0110 02:45:30.051987  208704 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem (1338 bytes)
	W0110 02:45:30.052024  208704 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168_empty.pem, impossibly tiny 0 bytes
	I0110 02:45:30.052032  208704 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:45:30.052058  208704 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:45:30.052087  208704 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:45:30.052114  208704 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem (1679 bytes)
	I0110 02:45:30.052164  208704 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:45:30.052894  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:45:30.074641  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:45:30.096600  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:45:30.122405  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 02:45:30.145183  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0110 02:45:30.163069  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:45:30.181491  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:45:30.207495  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I0110 02:45:30.227619  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:45:30.247757  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I0110 02:45:30.270519  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I0110 02:45:30.292053  208704 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:45:30.305407  208704 ssh_runner.go:195] Run: openssl version
	I0110 02:45:30.311715  208704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:45:30.319930  208704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:45:30.328159  208704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:45:30.331621  208704 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:45:30.331702  208704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:45:30.378929  208704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:45:30.386482  208704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I0110 02:45:30.393745  208704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I0110 02:45:30.401004  208704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I0110 02:45:30.405094  208704 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:58 /usr/share/ca-certificates/4168.pem
	I0110 02:45:30.405193  208704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I0110 02:45:30.445815  208704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:45:30.453981  208704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I0110 02:45:30.461767  208704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I0110 02:45:30.469963  208704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I0110 02:45:30.474174  208704 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:58 /usr/share/ca-certificates/41682.pem
	I0110 02:45:30.474241  208704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I0110 02:45:30.520829  208704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:45:30.528226  208704 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:45:30.531985  208704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 02:45:30.574396  208704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 02:45:30.615524  208704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 02:45:30.656838  208704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 02:45:30.703388  208704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 02:45:30.762665  208704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0110 02:45:30.837267  208704 kubeadm.go:401] StartCluster: {Name:embed-certs-290628 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-290628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:45:30.837363  208704 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:45:30.837444  208704 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:45:30.893066  208704 cri.go:96] found id: "35fda1be4302381d9f829358ffbd2635d0f9ef32fc82cec42ebc5bb2174d6351"
	I0110 02:45:30.893090  208704 cri.go:96] found id: "ddcfa0a7f69364becc7ff3f5e5d934a47a126035e79baf2f684410b0a13244b8"
	I0110 02:45:30.893095  208704 cri.go:96] found id: "5205ca6cd4481f5c3b3669c60cae1e88fadbf6a0dbed071dfc5ed887d68c1eb6"
	I0110 02:45:30.893098  208704 cri.go:96] found id: "95e8806653f7a62ea44d48caf19c96fe7f8f2f3713d980b9169516d588775029"
	I0110 02:45:30.893108  208704 cri.go:96] found id: ""
	I0110 02:45:30.893182  208704 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 02:45:30.911047  208704 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:45:30Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:45:30.911146  208704 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:45:30.926794  208704 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 02:45:30.926830  208704 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 02:45:30.926925  208704 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 02:45:30.940480  208704 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 02:45:30.940975  208704 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-290628" does not appear in /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:45:30.941124  208704 kubeconfig.go:62] /home/jenkins/minikube-integration/22414-2353/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-290628" cluster setting kubeconfig missing "embed-certs-290628" context setting]
	I0110 02:45:30.941452  208704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:45:30.942990  208704 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 02:45:30.952371  208704 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0110 02:45:30.952405  208704 kubeadm.go:602] duration metric: took 25.568961ms to restartPrimaryControlPlane
	I0110 02:45:30.952452  208704 kubeadm.go:403] duration metric: took 115.175841ms to StartCluster
	I0110 02:45:30.952474  208704 settings.go:142] acquiring lock: {Name:mkf19ab3097ad600bdb33c5b47b20062ddaabac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:45:30.952545  208704 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:45:30.953662  208704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:45:30.953928  208704 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:45:30.954451  208704 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:45:30.954530  208704 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-290628"
	I0110 02:45:30.954561  208704 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-290628"
	W0110 02:45:30.954572  208704 addons.go:248] addon storage-provisioner should already be in state true
	I0110 02:45:30.954594  208704 host.go:66] Checking if "embed-certs-290628" exists ...
	I0110 02:45:30.955118  208704 cli_runner.go:164] Run: docker container inspect embed-certs-290628 --format={{.State.Status}}
	I0110 02:45:30.955323  208704 config.go:182] Loaded profile config "embed-certs-290628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:45:30.955392  208704 addons.go:70] Setting dashboard=true in profile "embed-certs-290628"
	I0110 02:45:30.955403  208704 addons.go:239] Setting addon dashboard=true in "embed-certs-290628"
	W0110 02:45:30.955408  208704 addons.go:248] addon dashboard should already be in state true
	I0110 02:45:30.955437  208704 host.go:66] Checking if "embed-certs-290628" exists ...
	I0110 02:45:30.955694  208704 addons.go:70] Setting default-storageclass=true in profile "embed-certs-290628"
	I0110 02:45:30.955713  208704 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-290628"
	I0110 02:45:30.955971  208704 cli_runner.go:164] Run: docker container inspect embed-certs-290628 --format={{.State.Status}}
	I0110 02:45:30.956388  208704 cli_runner.go:164] Run: docker container inspect embed-certs-290628 --format={{.State.Status}}
	I0110 02:45:30.958319  208704 out.go:179] * Verifying Kubernetes components...
	I0110 02:45:30.961466  208704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:45:31.004947  208704 addons.go:239] Setting addon default-storageclass=true in "embed-certs-290628"
	W0110 02:45:31.004974  208704 addons.go:248] addon default-storageclass should already be in state true
	I0110 02:45:31.004997  208704 host.go:66] Checking if "embed-certs-290628" exists ...
	I0110 02:45:31.005431  208704 cli_runner.go:164] Run: docker container inspect embed-certs-290628 --format={{.State.Status}}
	I0110 02:45:31.011218  208704 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:45:31.016206  208704 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:45:31.016240  208704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:45:31.016314  208704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:45:31.029898  208704 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 02:45:31.032862  208704 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 02:45:31.035703  208704 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 02:45:31.035737  208704 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 02:45:31.035833  208704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:45:31.068308  208704 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:45:31.068336  208704 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:45:31.068399  208704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:45:31.095654  208704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa Username:docker}
	I0110 02:45:31.098874  208704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa Username:docker}
	I0110 02:45:31.126966  208704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa Username:docker}
	I0110 02:45:31.330037  208704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:45:31.383902  208704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:45:31.390479  208704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 02:45:31.390551  208704 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 02:45:31.435194  208704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:45:31.467086  208704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 02:45:31.467195  208704 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 02:45:31.528500  208704 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 02:45:31.528574  208704 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 02:45:31.587360  208704 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 02:45:31.587379  208704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 02:45:31.629967  208704 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 02:45:31.629987  208704 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 02:45:31.649018  208704 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 02:45:31.649039  208704 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 02:45:31.667116  208704 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 02:45:31.667137  208704 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 02:45:31.690991  208704 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 02:45:31.691066  208704 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 02:45:31.710567  208704 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:45:31.710637  208704 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 02:45:31.732857  208704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:45:35.881919  208704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.551792434s)
	I0110 02:45:35.882028  208704 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.498048051s)
	I0110 02:45:35.882093  208704 node_ready.go:35] waiting up to 6m0s for node "embed-certs-290628" to be "Ready" ...
	I0110 02:45:35.882424  208704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.447157507s)
	I0110 02:45:35.883067  208704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.150120502s)
	I0110 02:45:35.886561  208704 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-290628 addons enable metrics-server
	
	I0110 02:45:35.916421  208704 node_ready.go:49] node "embed-certs-290628" is "Ready"
	I0110 02:45:35.916494  208704 node_ready.go:38] duration metric: took 34.369148ms for node "embed-certs-290628" to be "Ready" ...
	I0110 02:45:35.916522  208704 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:45:35.916605  208704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:45:35.924137  208704 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0110 02:45:35.927038  208704 addons.go:530] duration metric: took 4.972586741s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0110 02:45:35.931113  208704 api_server.go:72] duration metric: took 4.977145665s to wait for apiserver process to appear ...
	I0110 02:45:35.931131  208704 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:45:35.931150  208704 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:45:35.939626  208704 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 02:45:35.940794  208704 api_server.go:141] control plane version: v1.35.0
	I0110 02:45:35.940855  208704 api_server.go:131] duration metric: took 9.713475ms to wait for apiserver health ...
	I0110 02:45:35.940878  208704 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:45:35.945276  208704 system_pods.go:59] 8 kube-system pods found
	I0110 02:45:35.945368  208704 system_pods.go:61] "coredns-7d764666f9-jwjfn" [858d8d57-f89b-4d9b-8aa5-dbf6572c266d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:45:35.945396  208704 system_pods.go:61] "etcd-embed-certs-290628" [7905e553-9138-4822-9649-ee6f3e2eb58d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:45:35.945418  208704 system_pods.go:61] "kindnet-g87jl" [be4cb760-ae30-4fd4-a122-e8e7f957939c] Running
	I0110 02:45:35.945457  208704 system_pods.go:61] "kube-apiserver-embed-certs-290628" [2e5da56c-c5c4-44b7-a1fd-36b46bd328d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:45:35.945485  208704 system_pods.go:61] "kube-controller-manager-embed-certs-290628" [186646b5-8524-41e4-9566-6466f756d103] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:45:35.945521  208704 system_pods.go:61] "kube-proxy-bdvjd" [af84947c-dfb8-478d-a250-f573f1edd0d7] Running
	I0110 02:45:35.945548  208704 system_pods.go:61] "kube-scheduler-embed-certs-290628" [699bf674-d25c-4ebc-8db6-5d58a627de92] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:45:35.945571  208704 system_pods.go:61] "storage-provisioner" [d2832396-2583-470a-a396-7e8bb76186de] Running
	I0110 02:45:35.945606  208704 system_pods.go:74] duration metric: took 4.708023ms to wait for pod list to return data ...
	I0110 02:45:35.945634  208704 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:45:35.948767  208704 default_sa.go:45] found service account: "default"
	I0110 02:45:35.948822  208704 default_sa.go:55] duration metric: took 3.167947ms for default service account to be created ...
	I0110 02:45:35.948855  208704 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:45:35.952717  208704 system_pods.go:86] 8 kube-system pods found
	I0110 02:45:35.952801  208704 system_pods.go:89] "coredns-7d764666f9-jwjfn" [858d8d57-f89b-4d9b-8aa5-dbf6572c266d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:45:35.952829  208704 system_pods.go:89] "etcd-embed-certs-290628" [7905e553-9138-4822-9649-ee6f3e2eb58d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:45:35.952866  208704 system_pods.go:89] "kindnet-g87jl" [be4cb760-ae30-4fd4-a122-e8e7f957939c] Running
	I0110 02:45:35.952896  208704 system_pods.go:89] "kube-apiserver-embed-certs-290628" [2e5da56c-c5c4-44b7-a1fd-36b46bd328d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:45:35.952920  208704 system_pods.go:89] "kube-controller-manager-embed-certs-290628" [186646b5-8524-41e4-9566-6466f756d103] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:45:35.952957  208704 system_pods.go:89] "kube-proxy-bdvjd" [af84947c-dfb8-478d-a250-f573f1edd0d7] Running
	I0110 02:45:35.952984  208704 system_pods.go:89] "kube-scheduler-embed-certs-290628" [699bf674-d25c-4ebc-8db6-5d58a627de92] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:45:35.953007  208704 system_pods.go:89] "storage-provisioner" [d2832396-2583-470a-a396-7e8bb76186de] Running
	I0110 02:45:35.953046  208704 system_pods.go:126] duration metric: took 4.171111ms to wait for k8s-apps to be running ...
	I0110 02:45:35.953075  208704 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:45:35.953157  208704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:45:35.968293  208704 system_svc.go:56] duration metric: took 15.201744ms WaitForService to wait for kubelet
	I0110 02:45:35.968320  208704 kubeadm.go:587] duration metric: took 5.014355548s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:45:35.968344  208704 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:45:35.971386  208704 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 02:45:35.971420  208704 node_conditions.go:123] node cpu capacity is 2
	I0110 02:45:35.971434  208704 node_conditions.go:105] duration metric: took 3.084248ms to run NodePressure ...
	I0110 02:45:35.971454  208704 start.go:242] waiting for startup goroutines ...
	I0110 02:45:35.971466  208704 start.go:247] waiting for cluster config update ...
	I0110 02:45:35.971480  208704 start.go:256] writing updated cluster config ...
	I0110 02:45:35.971771  208704 ssh_runner.go:195] Run: rm -f paused
	I0110 02:45:35.975428  208704 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:45:35.979443  208704 pod_ready.go:83] waiting for pod "coredns-7d764666f9-jwjfn" in "kube-system" namespace to be "Ready" or be gone ...
	W0110 02:45:37.991540  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:45:40.484362  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:45:42.488081  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:45:44.984522  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:45:47.484647  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:45:49.485270  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:45:51.984822  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:45:54.484468  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:45:56.484797  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:45:58.485302  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:46:00.485337  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:46:02.984487  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:46:05.485425  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:46:07.985045  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:46:09.985084  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:46:12.485028  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:46:14.485098  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	I0110 02:46:16.485049  208704 pod_ready.go:94] pod "coredns-7d764666f9-jwjfn" is "Ready"
	I0110 02:46:16.485079  208704 pod_ready.go:86] duration metric: took 40.505614363s for pod "coredns-7d764666f9-jwjfn" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:46:16.487585  208704 pod_ready.go:83] waiting for pod "etcd-embed-certs-290628" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:46:16.491827  208704 pod_ready.go:94] pod "etcd-embed-certs-290628" is "Ready"
	I0110 02:46:16.491852  208704 pod_ready.go:86] duration metric: took 4.244923ms for pod "etcd-embed-certs-290628" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:46:16.494036  208704 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-290628" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:46:16.498298  208704 pod_ready.go:94] pod "kube-apiserver-embed-certs-290628" is "Ready"
	I0110 02:46:16.498357  208704 pod_ready.go:86] duration metric: took 4.294432ms for pod "kube-apiserver-embed-certs-290628" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:46:16.500558  208704 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-290628" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:46:16.683512  208704 pod_ready.go:94] pod "kube-controller-manager-embed-certs-290628" is "Ready"
	I0110 02:46:16.683543  208704 pod_ready.go:86] duration metric: took 182.958771ms for pod "kube-controller-manager-embed-certs-290628" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:46:16.883495  208704 pod_ready.go:83] waiting for pod "kube-proxy-bdvjd" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:46:17.283489  208704 pod_ready.go:94] pod "kube-proxy-bdvjd" is "Ready"
	I0110 02:46:17.283518  208704 pod_ready.go:86] duration metric: took 399.996332ms for pod "kube-proxy-bdvjd" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:46:17.483653  208704 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-290628" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:46:17.883678  208704 pod_ready.go:94] pod "kube-scheduler-embed-certs-290628" is "Ready"
	I0110 02:46:17.883709  208704 pod_ready.go:86] duration metric: took 400.028414ms for pod "kube-scheduler-embed-certs-290628" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:46:17.883722  208704 pod_ready.go:40] duration metric: took 41.908254882s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:46:17.940074  208704 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 02:46:17.944148  208704 out.go:203] 
	W0110 02:46:17.947968  208704 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 02:46:17.951413  208704 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:46:17.955061  208704 out.go:179] * Done! kubectl is now configured to use "embed-certs-290628" cluster and "default" namespace by default
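
For reference, the healthz probe recorded at 02:45:35 above is an HTTPS GET against https://192.168.76.2:8443/healthz that counts as healthy when the server answers 200 with a body of "ok". Below is a minimal Go sketch of such a poll, not minikube's own code: it skips TLS verification only to stay self-contained, whereas a real check would trust the cluster's CA.

    // healthzpoll.go - illustrative poll of an apiserver /healthz endpoint.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "strings"
        "time"
    )

    func main() {
        // Address taken from the log above; InsecureSkipVerify is for illustration only.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for i := 0; i < 30; i++ {
            resp, err := client.Get("https://192.168.76.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
                    fmt.Println("apiserver is healthy")
                    return
                }
            }
            time.Sleep(2 * time.Second) // retry until healthy or attempts are exhausted
        }
        fmt.Println("apiserver did not become healthy in time")
    }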
	
	
	==> CRI-O <==
	Jan 10 02:46:05 embed-certs-290628 crio[662]: time="2026-01-10T02:46:05.454170881Z" level=info msg="Created container 0aae3cd379ca33876ea4a9b0c32378c10261c80e2ab472d4eee964f838bb8955: kube-system/storage-provisioner/storage-provisioner" id=e86e2af4-041d-4295-abb1-51b634c49ef5 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:46:05 embed-certs-290628 crio[662]: time="2026-01-10T02:46:05.455206267Z" level=info msg="Starting container: 0aae3cd379ca33876ea4a9b0c32378c10261c80e2ab472d4eee964f838bb8955" id=cc807a71-c742-40ca-b183-059ba2a849e6 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:46:05 embed-certs-290628 crio[662]: time="2026-01-10T02:46:05.457604437Z" level=info msg="Started container" PID=1701 containerID=0aae3cd379ca33876ea4a9b0c32378c10261c80e2ab472d4eee964f838bb8955 description=kube-system/storage-provisioner/storage-provisioner id=cc807a71-c742-40ca-b183-059ba2a849e6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39247c363004b59b9ae8df507e033d6660e20b7f8b2aa1f7239700a2b28294d2
	Jan 10 02:46:15 embed-certs-290628 crio[662]: time="2026-01-10T02:46:15.23159149Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:46:15 embed-certs-290628 crio[662]: time="2026-01-10T02:46:15.231628051Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:46:15 embed-certs-290628 crio[662]: time="2026-01-10T02:46:15.235900994Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:46:15 embed-certs-290628 crio[662]: time="2026-01-10T02:46:15.235935906Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:46:15 embed-certs-290628 crio[662]: time="2026-01-10T02:46:15.239880464Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:46:15 embed-certs-290628 crio[662]: time="2026-01-10T02:46:15.239913833Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:46:15 embed-certs-290628 crio[662]: time="2026-01-10T02:46:15.239939916Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 10 02:46:15 embed-certs-290628 crio[662]: time="2026-01-10T02:46:15.24398505Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:46:15 embed-certs-290628 crio[662]: time="2026-01-10T02:46:15.244018526Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:46:22 embed-certs-290628 crio[662]: time="2026-01-10T02:46:22.273292121Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=29e5e6bb-8817-4500-868a-a95f4d6d78ea name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:46:22 embed-certs-290628 crio[662]: time="2026-01-10T02:46:22.274521897Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c6c7ead2-e4c0-4427-b381-1705bf7dadf9 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:46:22 embed-certs-290628 crio[662]: time="2026-01-10T02:46:22.275528155Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq/dashboard-metrics-scraper" id=54f0c51d-3ae6-4f23-a86d-594b0c8de887 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:46:22 embed-certs-290628 crio[662]: time="2026-01-10T02:46:22.275620715Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:46:22 embed-certs-290628 crio[662]: time="2026-01-10T02:46:22.282479877Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:46:22 embed-certs-290628 crio[662]: time="2026-01-10T02:46:22.28317571Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:46:22 embed-certs-290628 crio[662]: time="2026-01-10T02:46:22.297877786Z" level=info msg="Created container e2653d95359074ec9974d0190d972ff3ef51db7c891e922f11b26f760b02f6e3: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq/dashboard-metrics-scraper" id=54f0c51d-3ae6-4f23-a86d-594b0c8de887 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:46:22 embed-certs-290628 crio[662]: time="2026-01-10T02:46:22.298499653Z" level=info msg="Starting container: e2653d95359074ec9974d0190d972ff3ef51db7c891e922f11b26f760b02f6e3" id=a3499546-6009-4fab-910a-c9a48855c676 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:46:22 embed-certs-290628 crio[662]: time="2026-01-10T02:46:22.302314754Z" level=info msg="Started container" PID=1772 containerID=e2653d95359074ec9974d0190d972ff3ef51db7c891e922f11b26f760b02f6e3 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq/dashboard-metrics-scraper id=a3499546-6009-4fab-910a-c9a48855c676 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a09055efe4166be1eec9405f9d10dacd82d1c10f96a661bcc7e53089c6a667d1
	Jan 10 02:46:22 embed-certs-290628 conmon[1770]: conmon e2653d95359074ec9974 <ninfo>: container 1772 exited with status 1
	Jan 10 02:46:22 embed-certs-290628 crio[662]: time="2026-01-10T02:46:22.465876999Z" level=info msg="Removing container: eaf120bff1b3eb57dfad75804087fc1fc5dd7290e8e06b80cf29e676efa2b252" id=42db3b3f-fb26-4392-93fe-ded458d0da07 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:46:22 embed-certs-290628 crio[662]: time="2026-01-10T02:46:22.472584567Z" level=info msg="Error loading conmon cgroup of container eaf120bff1b3eb57dfad75804087fc1fc5dd7290e8e06b80cf29e676efa2b252: cgroup deleted" id=42db3b3f-fb26-4392-93fe-ded458d0da07 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:46:22 embed-certs-290628 crio[662]: time="2026-01-10T02:46:22.475406694Z" level=info msg="Removed container eaf120bff1b3eb57dfad75804087fc1fc5dd7290e8e06b80cf29e676efa2b252: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq/dashboard-metrics-scraper" id=42db3b3f-fb26-4392-93fe-ded458d0da07 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	e2653d9535907       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago       Exited              dashboard-metrics-scraper   3                   a09055efe4166       dashboard-metrics-scraper-867fb5f87b-7tmmq   kubernetes-dashboard
	0aae3cd379ca3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   39247c363004b       storage-provisioner                          kube-system
	54cdd96263e16       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   49 seconds ago       Running             kubernetes-dashboard        0                   e535147996934       kubernetes-dashboard-b84665fb8-hxqv7         kubernetes-dashboard
	5ad18c1213ce1       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           57 seconds ago       Running             coredns                     1                   b3466e6d60360       coredns-7d764666f9-jwjfn                     kube-system
	3020727c504b0       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   7cdfab0e39387       busybox                                      default
	b14c2926b4e74       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   39247c363004b       storage-provisioner                          kube-system
	4e7d5f61e1851       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           57 seconds ago       Running             kube-proxy                  1                   9277e4454eddb       kube-proxy-bdvjd                             kube-system
	cf97d9a5c9164       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           57 seconds ago       Running             kindnet-cni                 1                   11a21720211cb       kindnet-g87jl                                kube-system
	35fda1be43023       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           About a minute ago   Running             kube-scheduler              1                   75e7e9eef20e9       kube-scheduler-embed-certs-290628            kube-system
	ddcfa0a7f6936       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           About a minute ago   Running             kube-controller-manager     1                   4d14a397c21aa       kube-controller-manager-embed-certs-290628   kube-system
	5205ca6cd4481       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           About a minute ago   Running             etcd                        1                   e968b91835da8       etcd-embed-certs-290628                      kube-system
	95e8806653f7a       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           About a minute ago   Running             kube-apiserver              1                   f7303fbd39e0c       kube-apiserver-embed-certs-290628            kube-system
	
	
	==> coredns [5ad18c1213ce1ba79c8b25d06d31b054e5c6d7d41fb47e3deaf5b50002f70222] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:34346 - 44887 "HINFO IN 8182947147099260957.3364710826530644747. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01329856s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	
	
	==> describe nodes <==
	Name:               embed-certs-290628
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-290628
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=embed-certs-290628
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_44_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:44:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-290628
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:46:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:46:05 +0000   Sat, 10 Jan 2026 02:44:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:46:05 +0000   Sat, 10 Jan 2026 02:44:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:46:05 +0000   Sat, 10 Jan 2026 02:44:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:46:05 +0000   Sat, 10 Jan 2026 02:44:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-290628
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                d1493119-98e9-4ef8-b2ce-67a3672d1963
	  Boot ID:                    41f52236-76b9-4dc8-bfe3-121fbdeb9659
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-7d764666f9-jwjfn                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-embed-certs-290628                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         115s
	  kube-system                 kindnet-g87jl                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-embed-certs-290628             250m (12%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-embed-certs-290628    200m (10%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-bdvjd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-embed-certs-290628             100m (5%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-7tmmq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-hxqv7          0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  111s  node-controller  Node embed-certs-290628 event: Registered Node embed-certs-290628 in Controller
	  Normal  RegisteredNode  55s   node-controller  Node embed-certs-290628 event: Registered Node embed-certs-290628 in Controller
	
	
	==> dmesg <==
	[Jan10 02:10] overlayfs: idmapped layers are currently not supported
	[Jan10 02:14] overlayfs: idmapped layers are currently not supported
	[ +27.765975] overlayfs: idmapped layers are currently not supported
	[Jan10 02:15] overlayfs: idmapped layers are currently not supported
	[Jan10 02:16] overlayfs: idmapped layers are currently not supported
	[Jan10 02:17] overlayfs: idmapped layers are currently not supported
	[Jan10 02:18] overlayfs: idmapped layers are currently not supported
	[ +23.212409] overlayfs: idmapped layers are currently not supported
	[  +9.866169] overlayfs: idmapped layers are currently not supported
	[Jan10 02:19] overlayfs: idmapped layers are currently not supported
	[Jan10 02:20] overlayfs: idmapped layers are currently not supported
	[ +35.960240] overlayfs: idmapped layers are currently not supported
	[Jan10 02:21] overlayfs: idmapped layers are currently not supported
	[Jan10 02:23] overlayfs: idmapped layers are currently not supported
	[Jan10 02:24] overlayfs: idmapped layers are currently not supported
	[Jan10 02:25] overlayfs: idmapped layers are currently not supported
	[Jan10 02:30] overlayfs: idmapped layers are currently not supported
	[Jan10 02:31] overlayfs: idmapped layers are currently not supported
	[Jan10 02:35] overlayfs: idmapped layers are currently not supported
	[Jan10 02:37] overlayfs: idmapped layers are currently not supported
	[Jan10 02:41] overlayfs: idmapped layers are currently not supported
	[ +37.534021] overlayfs: idmapped layers are currently not supported
	[Jan10 02:43] overlayfs: idmapped layers are currently not supported
	[Jan10 02:44] overlayfs: idmapped layers are currently not supported
	[Jan10 02:45] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5205ca6cd4481f5c3b3669c60cae1e88fadbf6a0dbed071dfc5ed887d68c1eb6] <==
	{"level":"info","ts":"2026-01-10T02:45:31.357738Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:45:31.357747Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:45:31.357938Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:45:31.357949Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:45:31.358936Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2026-01-10T02:45:31.359005Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T02:45:31.359075Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T02:45:32.115894Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T02:45:32.116025Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:45:32.116122Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T02:45:32.116163Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:45:32.116202Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T02:45:32.121950Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T02:45:32.122064Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:45:32.122119Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T02:45:32.122154Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T02:45:32.124001Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-290628 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:45:32.124253Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:45:32.124406Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:45:32.124545Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:45:32.124582Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:45:32.125538Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:45:32.129817Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T02:45:32.134644Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:45:32.135456Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 02:46:32 up  1:29,  0 user,  load average: 1.04, 1.62, 1.75
	Linux embed-certs-290628 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cf97d9a5c9164ffd276eb59e2c2d5a25f4e245b1724464838763e7794e90f36e] <==
	I0110 02:45:35.042267       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:45:35.042485       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 02:45:35.042598       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:45:35.042611       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:45:35.042620       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:45:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:45:35.227386       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:45:35.227408       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:45:35.227417       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:45:35.227731       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0110 02:46:05.227470       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0110 02:46:05.227472       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0110 02:46:05.227694       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0110 02:46:05.228395       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I0110 02:46:06.728168       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:46:06.728196       1 metrics.go:72] Registering metrics
	I0110 02:46:06.728265       1 controller.go:711] "Syncing nftables rules"
	I0110 02:46:15.226612       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:46:15.226735       1 main.go:301] handling current node
	I0110 02:46:25.227035       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:46:25.227068       1 main.go:301] handling current node
	
	
	==> kube-apiserver [95e8806653f7a62ea44d48caf19c96fe7f8f2f3713d980b9169516d588775029] <==
	I0110 02:45:34.252517       1 aggregator.go:187] initial CRD sync complete...
	I0110 02:45:34.252525       1 autoregister_controller.go:144] Starting autoregister controller
	I0110 02:45:34.252531       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 02:45:34.252536       1 cache.go:39] Caches are synced for autoregister controller
	I0110 02:45:34.260412       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:34.260775       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:34.260808       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0110 02:45:34.260997       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 02:45:34.276082       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 02:45:34.309489       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 02:45:34.312903       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:34.312920       1 policy_source.go:248] refreshing policies
	I0110 02:45:34.351607       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:45:34.368769       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:45:34.880570       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:45:35.378396       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:45:35.526641       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:45:35.591679       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:45:35.623951       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:45:35.775351       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.229.160"}
	I0110 02:45:35.801622       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.154.191"}
	I0110 02:45:37.873594       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:45:37.873642       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:45:37.974855       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:45:38.024100       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ddcfa0a7f69364becc7ff3f5e5d934a47a126035e79baf2f684410b0a13244b8] <==
	I0110 02:45:37.386279       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.386332       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.386434       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 02:45:37.386535       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="embed-certs-290628"
	I0110 02:45:37.386615       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0110 02:45:37.386680       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.386748       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.386837       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.387033       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.387104       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.387190       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.387243       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.387296       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.390791       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.390963       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.391035       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.391391       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.385287       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:45:37.393395       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.425572       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.488626       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.488745       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:45:37.488759       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:45:37.493609       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.985550       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [4e7d5f61e1851fb6bf16a2740f2dac2f735df2977fb762c6d77ec2fd39e8aa7b] <==
	I0110 02:45:35.337295       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:45:35.466597       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:45:35.569625       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:35.569666       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 02:45:35.569758       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:45:35.687811       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:45:35.690748       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:45:35.720494       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:45:35.721037       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:45:35.721100       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:45:35.740113       1 config.go:200] "Starting service config controller"
	I0110 02:45:35.745641       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:45:35.741297       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:45:35.745701       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:45:35.741742       1 config.go:309] "Starting node config controller"
	I0110 02:45:35.745726       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:45:35.745731       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:45:35.741309       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:45:35.745738       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:45:35.846606       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:45:35.846646       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 02:45:35.856808       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [35fda1be4302381d9f829358ffbd2635d0f9ef32fc82cec42ebc5bb2174d6351] <==
	I0110 02:45:32.229578       1 serving.go:386] Generated self-signed cert in-memory
	W0110 02:45:34.161568       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 02:45:34.161621       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 02:45:34.161632       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 02:45:34.161639       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 02:45:34.274216       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 02:45:34.274251       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:45:34.283123       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 02:45:34.283275       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 02:45:34.283292       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:45:34.283309       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 02:45:34.388734       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:45:50 embed-certs-290628 kubelet[794]: E0110 02:45:50.380149     794 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-290628" containerName="kube-scheduler"
	Jan 10 02:45:50 embed-certs-290628 kubelet[794]: E0110 02:45:50.383200     794 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq" containerName="dashboard-metrics-scraper"
	Jan 10 02:45:50 embed-certs-290628 kubelet[794]: I0110 02:45:50.383247     794 scope.go:122] "RemoveContainer" containerID="183dab0f7b1b5b7c0e4ec4750b4d61227371e1ee6e2d9327e8a9522c1b5468f5"
	Jan 10 02:45:50 embed-certs-290628 kubelet[794]: E0110 02:45:50.383398     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7tmmq_kubernetes-dashboard(b86ea3de-5b08-4cbd-b9c6-a07f05c7fd98)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq" podUID="b86ea3de-5b08-4cbd-b9c6-a07f05c7fd98"
	Jan 10 02:45:59 embed-certs-290628 kubelet[794]: E0110 02:45:59.271925     794 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq" containerName="dashboard-metrics-scraper"
	Jan 10 02:45:59 embed-certs-290628 kubelet[794]: I0110 02:45:59.271991     794 scope.go:122] "RemoveContainer" containerID="183dab0f7b1b5b7c0e4ec4750b4d61227371e1ee6e2d9327e8a9522c1b5468f5"
	Jan 10 02:45:59 embed-certs-290628 kubelet[794]: I0110 02:45:59.401068     794 scope.go:122] "RemoveContainer" containerID="183dab0f7b1b5b7c0e4ec4750b4d61227371e1ee6e2d9327e8a9522c1b5468f5"
	Jan 10 02:45:59 embed-certs-290628 kubelet[794]: E0110 02:45:59.403878     794 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq" containerName="dashboard-metrics-scraper"
	Jan 10 02:45:59 embed-certs-290628 kubelet[794]: I0110 02:45:59.404142     794 scope.go:122] "RemoveContainer" containerID="eaf120bff1b3eb57dfad75804087fc1fc5dd7290e8e06b80cf29e676efa2b252"
	Jan 10 02:45:59 embed-certs-290628 kubelet[794]: E0110 02:45:59.404405     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7tmmq_kubernetes-dashboard(b86ea3de-5b08-4cbd-b9c6-a07f05c7fd98)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq" podUID="b86ea3de-5b08-4cbd-b9c6-a07f05c7fd98"
	Jan 10 02:46:00 embed-certs-290628 kubelet[794]: E0110 02:46:00.406908     794 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq" containerName="dashboard-metrics-scraper"
	Jan 10 02:46:00 embed-certs-290628 kubelet[794]: I0110 02:46:00.406951     794 scope.go:122] "RemoveContainer" containerID="eaf120bff1b3eb57dfad75804087fc1fc5dd7290e8e06b80cf29e676efa2b252"
	Jan 10 02:46:00 embed-certs-290628 kubelet[794]: E0110 02:46:00.407146     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7tmmq_kubernetes-dashboard(b86ea3de-5b08-4cbd-b9c6-a07f05c7fd98)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq" podUID="b86ea3de-5b08-4cbd-b9c6-a07f05c7fd98"
	Jan 10 02:46:05 embed-certs-290628 kubelet[794]: I0110 02:46:05.420116     794 scope.go:122] "RemoveContainer" containerID="b14c2926b4e747d67085d8e65c5115d40f6a48d7c026c413f61aeaca8a99c96e"
	Jan 10 02:46:16 embed-certs-290628 kubelet[794]: E0110 02:46:16.335869     794 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-jwjfn" containerName="coredns"
	Jan 10 02:46:22 embed-certs-290628 kubelet[794]: E0110 02:46:22.272875     794 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq" containerName="dashboard-metrics-scraper"
	Jan 10 02:46:22 embed-certs-290628 kubelet[794]: I0110 02:46:22.272914     794 scope.go:122] "RemoveContainer" containerID="eaf120bff1b3eb57dfad75804087fc1fc5dd7290e8e06b80cf29e676efa2b252"
	Jan 10 02:46:22 embed-certs-290628 kubelet[794]: I0110 02:46:22.464486     794 scope.go:122] "RemoveContainer" containerID="eaf120bff1b3eb57dfad75804087fc1fc5dd7290e8e06b80cf29e676efa2b252"
	Jan 10 02:46:23 embed-certs-290628 kubelet[794]: E0110 02:46:23.468532     794 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq" containerName="dashboard-metrics-scraper"
	Jan 10 02:46:23 embed-certs-290628 kubelet[794]: I0110 02:46:23.468569     794 scope.go:122] "RemoveContainer" containerID="e2653d95359074ec9974d0190d972ff3ef51db7c891e922f11b26f760b02f6e3"
	Jan 10 02:46:23 embed-certs-290628 kubelet[794]: E0110 02:46:23.468721     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7tmmq_kubernetes-dashboard(b86ea3de-5b08-4cbd-b9c6-a07f05c7fd98)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq" podUID="b86ea3de-5b08-4cbd-b9c6-a07f05c7fd98"
	Jan 10 02:46:30 embed-certs-290628 kubelet[794]: I0110 02:46:30.160139     794 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jan 10 02:46:30 embed-certs-290628 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 02:46:30 embed-certs-290628 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 02:46:30 embed-certs-290628 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [54cdd96263e164280da6cc533b71f17c8723cb4d97eb22286bfa92df9daa37aa] <==
	2026/01/10 02:45:42 Using namespace: kubernetes-dashboard
	2026/01/10 02:45:42 Using in-cluster config to connect to apiserver
	2026/01/10 02:45:42 Using secret token for csrf signing
	2026/01/10 02:45:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 02:45:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 02:45:42 Successful initial request to the apiserver, version: v1.35.0
	2026/01/10 02:45:42 Generating JWE encryption key
	2026/01/10 02:45:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 02:45:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 02:45:43 Initializing JWE encryption key from synchronized object
	2026/01/10 02:45:43 Creating in-cluster Sidecar client
	2026/01/10 02:45:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:45:43 Serving insecurely on HTTP port: 9090
	2026/01/10 02:46:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:45:42 Starting overwatch
	
	
	==> storage-provisioner [0aae3cd379ca33876ea4a9b0c32378c10261c80e2ab472d4eee964f838bb8955] <==
	I0110 02:46:05.486452       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 02:46:05.486522       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 02:46:05.488695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:08.944391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:13.205092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:16.803053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:19.856108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:22.877858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:22.882659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:46:22.882813       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 02:46:22.882970       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-290628_aae469b9-1735-440b-ac1f-2aad80c0dff1!
	I0110 02:46:22.883175       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"07064d36-5187-4459-b216-f8310ec76f12", APIVersion:"v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-290628_aae469b9-1735-440b-ac1f-2aad80c0dff1 became leader
	W0110 02:46:22.891460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:22.895153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:46:22.983378       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-290628_aae469b9-1735-440b-ac1f-2aad80c0dff1!
	W0110 02:46:24.898049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:24.902338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:26.913917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:26.952989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:28.955607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:28.961085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:30.963700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:30.971214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:32.973660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:32.978984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b14c2926b4e747d67085d8e65c5115d40f6a48d7c026c413f61aeaca8a99c96e] <==
	I0110 02:45:35.153917       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 02:46:05.167211       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-290628 -n embed-certs-290628
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-290628 -n embed-certs-290628: exit status 2 (372.089712ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-290628 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-290628
helpers_test.go:244: (dbg) docker inspect embed-certs-290628:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "23cbe0d69bf1f36f3e571133519e268b4644b805d56bd2cad7573cc2c5daf4e8",
	        "Created": "2026-01-10T02:44:15.973564072Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 208836,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:45:23.645458131Z",
	            "FinishedAt": "2026-01-10T02:45:22.854933331Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/23cbe0d69bf1f36f3e571133519e268b4644b805d56bd2cad7573cc2c5daf4e8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/23cbe0d69bf1f36f3e571133519e268b4644b805d56bd2cad7573cc2c5daf4e8/hostname",
	        "HostsPath": "/var/lib/docker/containers/23cbe0d69bf1f36f3e571133519e268b4644b805d56bd2cad7573cc2c5daf4e8/hosts",
	        "LogPath": "/var/lib/docker/containers/23cbe0d69bf1f36f3e571133519e268b4644b805d56bd2cad7573cc2c5daf4e8/23cbe0d69bf1f36f3e571133519e268b4644b805d56bd2cad7573cc2c5daf4e8-json.log",
	        "Name": "/embed-certs-290628",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-290628:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-290628",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "23cbe0d69bf1f36f3e571133519e268b4644b805d56bd2cad7573cc2c5daf4e8",
	                "LowerDir": "/var/lib/docker/overlay2/ed28b0ecc2316eb191e9de699f8d778d307925e07bfe86b1703fc76d49ce1266-init/diff:/var/lib/docker/overlay2/2b235871be32ebd3e55c0a490bf5b289824c6be94b4d2763f5bca3b814af3fd1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ed28b0ecc2316eb191e9de699f8d778d307925e07bfe86b1703fc76d49ce1266/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ed28b0ecc2316eb191e9de699f8d778d307925e07bfe86b1703fc76d49ce1266/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ed28b0ecc2316eb191e9de699f8d778d307925e07bfe86b1703fc76d49ce1266/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-290628",
	                "Source": "/var/lib/docker/volumes/embed-certs-290628/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-290628",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-290628",
	                "name.minikube.sigs.k8s.io": "embed-certs-290628",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ea693768ce2a0097f8409c3b07da187b84cbfe2e126cf690d52ecf42dc7ea9d0",
	            "SandboxKey": "/var/run/docker/netns/ea693768ce2a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-290628": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:b8:e0:54:27:1f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ce2734d3261bfe1e9933e2600de541689c00d14ccf15e6611f60408ce1c3af3",
	                    "EndpointID": "fc15b62ebdf59e6ba1b7606af080b3020521f26512c3518130f7c26afff767e6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-290628",
	                        "23cbe0d69bf1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-290628 -n embed-certs-290628
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-290628 -n embed-certs-290628: exit status 2 (342.84667ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-290628 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-290628 logs -n 25: (1.236460755s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-213257 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-213257    │ jenkins │ v1.37.0 │ 10 Jan 26 02:36 UTC │ 10 Jan 26 02:37 UTC │
	│ start   │ -p cert-expiration-213257 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-213257    │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:40 UTC │
	│ delete  │ -p cert-expiration-213257                                                                                                                                                                                                                     │ cert-expiration-213257    │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:40 UTC │
	│ start   │ -p force-systemd-flag-038359 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-038359 │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │                     │
	│ delete  │ -p force-systemd-env-088457                                                                                                                                                                                                                   │ force-systemd-env-088457  │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:40 UTC │
	│ start   │ -p cert-options-295914 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-295914       │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:41 UTC │
	│ ssh     │ cert-options-295914 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-295914       │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:41 UTC │
	│ ssh     │ -p cert-options-295914 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-295914       │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:41 UTC │
	│ delete  │ -p cert-options-295914                                                                                                                                                                                                                        │ cert-options-295914       │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:41 UTC │
	│ start   │ -p old-k8s-version-736081 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:42 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-736081 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │                     │
	│ stop    │ -p old-k8s-version-736081 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-736081 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:42 UTC │
	│ start   │ -p old-k8s-version-736081 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:43 UTC │
	│ image   │ old-k8s-version-736081 image list --format=json                                                                                                                                                                                               │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ pause   │ -p old-k8s-version-736081 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │                     │
	│ delete  │ -p old-k8s-version-736081                                                                                                                                                                                                                     │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ delete  │ -p old-k8s-version-736081                                                                                                                                                                                                                     │ old-k8s-version-736081    │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ start   │ -p embed-certs-290628 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-290628        │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ addons  │ enable metrics-server -p embed-certs-290628 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-290628        │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │                     │
	│ stop    │ -p embed-certs-290628 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-290628        │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │ 10 Jan 26 02:45 UTC │
	│ addons  │ enable dashboard -p embed-certs-290628 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-290628        │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │ 10 Jan 26 02:45 UTC │
	│ start   │ -p embed-certs-290628 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-290628        │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │ 10 Jan 26 02:46 UTC │
	│ image   │ embed-certs-290628 image list --format=json                                                                                                                                                                                                   │ embed-certs-290628        │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ pause   │ -p embed-certs-290628 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-290628        │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:45:23
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:45:23.381410  208704 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:45:23.381597  208704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:45:23.381624  208704 out.go:374] Setting ErrFile to fd 2...
	I0110 02:45:23.381648  208704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:45:23.381952  208704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:45:23.382417  208704 out.go:368] Setting JSON to false
	I0110 02:45:23.383287  208704 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5273,"bootTime":1768007851,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 02:45:23.383380  208704 start.go:143] virtualization:  
	I0110 02:45:23.386669  208704 out.go:179] * [embed-certs-290628] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:45:23.390561  208704 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:45:23.390641  208704 notify.go:221] Checking for updates...
	I0110 02:45:23.394415  208704 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:45:23.397350  208704 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:45:23.400205  208704 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 02:45:23.403114  208704 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:45:23.406025  208704 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:45:23.409416  208704 config.go:182] Loaded profile config "embed-certs-290628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:45:23.410022  208704 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:45:23.434456  208704 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:45:23.434555  208704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:45:23.497452  208704 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:45:23.488538143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:45:23.497573  208704 docker.go:319] overlay module found
	I0110 02:45:23.500815  208704 out.go:179] * Using the docker driver based on existing profile
	I0110 02:45:23.503661  208704 start.go:309] selected driver: docker
	I0110 02:45:23.503689  208704 start.go:928] validating driver "docker" against &{Name:embed-certs-290628 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-290628 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:45:23.503788  208704 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:45:23.504486  208704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:45:23.561052  208704 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:45:23.552226285 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:45:23.561369  208704 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:45:23.561404  208704 cni.go:84] Creating CNI manager for ""
	I0110 02:45:23.561460  208704 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:45:23.561506  208704 start.go:353] cluster config:
	{Name:embed-certs-290628 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-290628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:45:23.564598  208704 out.go:179] * Starting "embed-certs-290628" primary control-plane node in "embed-certs-290628" cluster
	I0110 02:45:23.567211  208704 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:45:23.569972  208704 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:45:23.572715  208704 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:45:23.572758  208704 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 02:45:23.572769  208704 cache.go:65] Caching tarball of preloaded images
	I0110 02:45:23.572799  208704 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:45:23.572855  208704 preload.go:251] Found /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 02:45:23.572866  208704 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:45:23.572973  208704 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/config.json ...
	I0110 02:45:23.591903  208704 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:45:23.591926  208704 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:45:23.591947  208704 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:45:23.591976  208704 start.go:360] acquireMachinesLock for embed-certs-290628: {Name:mkecc1830917e603b9fb1bffd9b396deb689a507 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:45:23.592033  208704 start.go:364] duration metric: took 36.586µs to acquireMachinesLock for "embed-certs-290628"
	I0110 02:45:23.592058  208704 start.go:96] Skipping create...Using existing machine configuration
	I0110 02:45:23.592066  208704 fix.go:54] fixHost starting: 
	I0110 02:45:23.592320  208704 cli_runner.go:164] Run: docker container inspect embed-certs-290628 --format={{.State.Status}}
	I0110 02:45:23.608911  208704 fix.go:112] recreateIfNeeded on embed-certs-290628: state=Stopped err=<nil>
	W0110 02:45:23.608941  208704 fix.go:138] unexpected machine state, will restart: <nil>
	I0110 02:45:23.612063  208704 out.go:252] * Restarting existing docker container for "embed-certs-290628" ...
	I0110 02:45:23.612143  208704 cli_runner.go:164] Run: docker start embed-certs-290628
	I0110 02:45:23.878801  208704 cli_runner.go:164] Run: docker container inspect embed-certs-290628 --format={{.State.Status}}
	I0110 02:45:23.904662  208704 kic.go:430] container "embed-certs-290628" state is running.
	I0110 02:45:23.905026  208704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-290628
	I0110 02:45:23.927476  208704 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/config.json ...
	I0110 02:45:23.927692  208704 machine.go:94] provisionDockerMachine start ...
	I0110 02:45:23.927762  208704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:45:23.946762  208704 main.go:144] libmachine: Using SSH client type: native
	I0110 02:45:23.947209  208704 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0110 02:45:23.947284  208704 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:45:23.949153  208704 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 02:45:27.095193  208704 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-290628
	
	I0110 02:45:27.095219  208704 ubuntu.go:182] provisioning hostname "embed-certs-290628"
	I0110 02:45:27.095285  208704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:45:27.113447  208704 main.go:144] libmachine: Using SSH client type: native
	I0110 02:45:27.113748  208704 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0110 02:45:27.113765  208704 main.go:144] libmachine: About to run SSH command:
	sudo hostname embed-certs-290628 && echo "embed-certs-290628" | sudo tee /etc/hostname
	I0110 02:45:27.267775  208704 main.go:144] libmachine: SSH cmd err, output: <nil>: embed-certs-290628
	
	I0110 02:45:27.267891  208704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:45:27.285865  208704 main.go:144] libmachine: Using SSH client type: native
	I0110 02:45:27.286183  208704 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0110 02:45:27.286207  208704 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-290628' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-290628/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-290628' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:45:27.431932  208704 main.go:144] libmachine: SSH cmd err, output: <nil>: 
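The provisioning step above makes the node's own hostname resolve locally through a 127.0.1.1 entry in /etc/hosts. A quick way to confirm the mapping afterwards, assuming the embed-certs-290628 profile is running and minikube ssh works against it, is:

	minikube -p embed-certs-290628 ssh -- grep embed-certs-290628 /etc/hosts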
	I0110 02:45:27.431962  208704 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2353/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2353/.minikube}
	I0110 02:45:27.431980  208704 ubuntu.go:190] setting up certificates
	I0110 02:45:27.431989  208704 provision.go:84] configureAuth start
	I0110 02:45:27.432048  208704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-290628
	I0110 02:45:27.453438  208704 provision.go:143] copyHostCerts
	I0110 02:45:27.453508  208704 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem, removing ...
	I0110 02:45:27.453516  208704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem
	I0110 02:45:27.453591  208704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem (1123 bytes)
	I0110 02:45:27.453690  208704 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem, removing ...
	I0110 02:45:27.453696  208704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem
	I0110 02:45:27.453720  208704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem (1679 bytes)
	I0110 02:45:27.453771  208704 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem, removing ...
	I0110 02:45:27.453775  208704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem
	I0110 02:45:27.453798  208704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem (1082 bytes)
	I0110 02:45:27.453841  208704 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem org=jenkins.embed-certs-290628 san=[127.0.0.1 192.168.76.2 embed-certs-290628 localhost minikube]
	I0110 02:45:27.510022  208704 provision.go:177] copyRemoteCerts
	I0110 02:45:27.510086  208704 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:45:27.510166  208704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:45:27.529229  208704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa Username:docker}
	I0110 02:45:27.636046  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:45:27.652825  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0110 02:45:27.669764  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:45:27.686693  208704 provision.go:87] duration metric: took 254.684325ms to configureAuth
	I0110 02:45:27.686720  208704 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:45:27.686911  208704 config.go:182] Loaded profile config "embed-certs-290628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:45:27.687060  208704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:45:27.704250  208704 main.go:144] libmachine: Using SSH client type: native
	I0110 02:45:27.704562  208704 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0110 02:45:27.704583  208704 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:45:28.054268  208704 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:45:28.054339  208704 machine.go:97] duration metric: took 4.126638028s to provisionDockerMachine
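The runtime options written a few lines earlier land in /etc/sysconfig/crio.minikube before crio is restarted. One possible spot check that the flag survived the restart, assuming the same profile name, is:

	minikube -p embed-certs-290628 ssh -- cat /etc/sysconfig/crio.minikube
	minikube -p embed-certs-290628 ssh -- sudo systemctl is-active crio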
	I0110 02:45:28.054367  208704 start.go:293] postStartSetup for "embed-certs-290628" (driver="docker")
	I0110 02:45:28.054410  208704 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:45:28.054498  208704 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:45:28.054572  208704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:45:28.077456  208704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa Username:docker}
	I0110 02:45:28.186248  208704 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:45:28.190232  208704 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:45:28.190255  208704 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:45:28.190266  208704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/addons for local assets ...
	I0110 02:45:28.190318  208704 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/files for local assets ...
	I0110 02:45:28.190389  208704 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> 41682.pem in /etc/ssl/certs
	I0110 02:45:28.190491  208704 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:45:28.200225  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:45:28.226621  208704 start.go:296] duration metric: took 172.211907ms for postStartSetup
	I0110 02:45:28.226714  208704 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:45:28.226781  208704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:45:28.246593  208704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa Username:docker}
	I0110 02:45:28.349043  208704 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:45:28.353809  208704 fix.go:56] duration metric: took 4.761736828s for fixHost
	I0110 02:45:28.353877  208704 start.go:83] releasing machines lock for "embed-certs-290628", held for 4.761829896s
	I0110 02:45:28.353953  208704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-290628
	I0110 02:45:28.370877  208704 ssh_runner.go:195] Run: cat /version.json
	I0110 02:45:28.370932  208704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:45:28.371194  208704 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:45:28.371246  208704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:45:28.395815  208704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa Username:docker}
	I0110 02:45:28.398432  208704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa Username:docker}
	I0110 02:45:28.605853  208704 ssh_runner.go:195] Run: systemctl --version
	I0110 02:45:28.612103  208704 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:45:28.644933  208704 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:45:28.649084  208704 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:45:28.649158  208704 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:45:28.656627  208704 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 02:45:28.656649  208704 start.go:496] detecting cgroup driver to use...
	I0110 02:45:28.656706  208704 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 02:45:28.656766  208704 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:45:28.671582  208704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:45:28.684400  208704 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:45:28.684462  208704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:45:28.699538  208704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:45:28.712172  208704 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:45:28.829218  208704 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:45:28.947730  208704 docker.go:234] disabling docker service ...
	I0110 02:45:28.947860  208704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:45:28.962478  208704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:45:28.977737  208704 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:45:29.091632  208704 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:45:29.208134  208704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:45:29.220570  208704 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:45:29.234330  208704 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:45:29.234395  208704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:45:29.242754  208704 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 02:45:29.242825  208704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:45:29.251692  208704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:45:29.260571  208704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:45:29.269216  208704 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:45:29.277572  208704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:45:29.286412  208704 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:45:29.295240  208704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:45:29.303834  208704 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:45:29.311061  208704 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:45:29.318373  208704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:45:29.426682  208704 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 02:45:29.586651  208704 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:45:29.586730  208704 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:45:29.590506  208704 start.go:574] Will wait 60s for crictl version
	I0110 02:45:29.590613  208704 ssh_runner.go:195] Run: which crictl
	I0110 02:45:29.594010  208704 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:45:29.618465  208704 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
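The version probe above reaches cri-o through its default socket. An equivalent manual query from a shell on the node (for example via minikube ssh), assuming crictl is installed there as the log indicates, pins the endpoint explicitly:

	# query runtime version and status over the cri-o socket
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info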
	I0110 02:45:29.618622  208704 ssh_runner.go:195] Run: crio --version
	I0110 02:45:29.644106  208704 ssh_runner.go:195] Run: crio --version
	I0110 02:45:29.677232  208704 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:45:29.680124  208704 cli_runner.go:164] Run: docker network inspect embed-certs-290628 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:45:29.698469  208704 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 02:45:29.702667  208704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:45:29.711719  208704 kubeadm.go:884] updating cluster {Name:embed-certs-290628 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-290628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:45:29.711933  208704 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:45:29.711986  208704 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:45:29.749560  208704 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:45:29.749582  208704 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:45:29.749641  208704 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:45:29.775093  208704 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:45:29.775115  208704 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:45:29.775124  208704 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 02:45:29.775256  208704 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-290628 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:embed-certs-290628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
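In the kubelet drop-in above, the empty ExecStart= line is standard systemd behaviour: it clears the ExecStart inherited from the base unit so the following line replaces the command instead of appending a second one. The merged unit, including this drop-in, can be inspected on the node with something like:

	minikube -p embed-certs-290628 ssh -- systemctl cat kubelet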
	I0110 02:45:29.775336  208704 ssh_runner.go:195] Run: crio config
	I0110 02:45:29.843086  208704 cni.go:84] Creating CNI manager for ""
	I0110 02:45:29.843153  208704 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:45:29.843203  208704 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:45:29.843244  208704 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-290628 NodeName:embed-certs-290628 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:45:29.843453  208704 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-290628"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:45:29.843571  208704 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:45:29.851951  208704 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:45:29.852059  208704 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:45:29.860227  208704 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0110 02:45:29.872996  208704 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:45:29.884962  208704 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2235 bytes)
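The generated kubeadm config shown earlier is staged on the node as /var/tmp/minikube/kubeadm.yaml.new. Newer kubeadm releases ship a config validate subcommand, so one way to sanity-check that file, assuming kubeadm is among the binaries found under /var/lib/minikube/binaries/v1.35.0, would be:

	minikube -p embed-certs-290628 ssh -- sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new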
	I0110 02:45:29.896753  208704 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:45:29.900204  208704 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:45:29.909553  208704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:45:30.033320  208704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:45:30.051377  208704 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628 for IP: 192.168.76.2
	I0110 02:45:30.051397  208704 certs.go:195] generating shared ca certs ...
	I0110 02:45:30.051414  208704 certs.go:227] acquiring lock for ca certs: {Name:mk60bb8578732e3e8efcf11c7d77fb48585828c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:45:30.051570  208704 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key
	I0110 02:45:30.051612  208704 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key
	I0110 02:45:30.051620  208704 certs.go:257] generating profile certs ...
	I0110 02:45:30.051711  208704 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/client.key
	I0110 02:45:30.051785  208704 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/apiserver.key.4427bfdd
	I0110 02:45:30.051867  208704 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/proxy-client.key
	I0110 02:45:30.051987  208704 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem (1338 bytes)
	W0110 02:45:30.052024  208704 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168_empty.pem, impossibly tiny 0 bytes
	I0110 02:45:30.052032  208704 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:45:30.052058  208704 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:45:30.052087  208704 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:45:30.052114  208704 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem (1679 bytes)
	I0110 02:45:30.052164  208704 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:45:30.052894  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:45:30.074641  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:45:30.096600  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:45:30.122405  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 02:45:30.145183  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0110 02:45:30.163069  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:45:30.181491  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:45:30.207495  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/embed-certs-290628/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I0110 02:45:30.227619  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:45:30.247757  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I0110 02:45:30.270519  208704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I0110 02:45:30.292053  208704 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:45:30.305407  208704 ssh_runner.go:195] Run: openssl version
	I0110 02:45:30.311715  208704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:45:30.319930  208704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:45:30.328159  208704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:45:30.331621  208704 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:45:30.331702  208704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:45:30.378929  208704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:45:30.386482  208704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I0110 02:45:30.393745  208704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I0110 02:45:30.401004  208704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I0110 02:45:30.405094  208704 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:58 /usr/share/ca-certificates/4168.pem
	I0110 02:45:30.405193  208704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I0110 02:45:30.445815  208704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:45:30.453981  208704 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I0110 02:45:30.461767  208704 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I0110 02:45:30.469963  208704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I0110 02:45:30.474174  208704 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:58 /usr/share/ca-certificates/41682.pem
	I0110 02:45:30.474241  208704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I0110 02:45:30.520829  208704 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:45:30.528226  208704 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:45:30.531985  208704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 02:45:30.574396  208704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 02:45:30.615524  208704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 02:45:30.656838  208704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 02:45:30.703388  208704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 02:45:30.762665  208704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
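The openssl x509 -checkend 86400 calls above succeed only if the certificate will still be valid 86400 seconds (24 hours) from now; a certificate expiring sooner makes the command exit non-zero. The same check can be run by hand from a shell on the node, for example:

	# exit status 0 means the certificate is still valid 24h from now
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for at least another 24h" || echo "expires within 24h"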
	I0110 02:45:30.837267  208704 kubeadm.go:401] StartCluster: {Name:embed-certs-290628 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-290628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:45:30.837363  208704 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:45:30.837444  208704 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:45:30.893066  208704 cri.go:96] found id: "35fda1be4302381d9f829358ffbd2635d0f9ef32fc82cec42ebc5bb2174d6351"
	I0110 02:45:30.893090  208704 cri.go:96] found id: "ddcfa0a7f69364becc7ff3f5e5d934a47a126035e79baf2f684410b0a13244b8"
	I0110 02:45:30.893095  208704 cri.go:96] found id: "5205ca6cd4481f5c3b3669c60cae1e88fadbf6a0dbed071dfc5ed887d68c1eb6"
	I0110 02:45:30.893098  208704 cri.go:96] found id: "95e8806653f7a62ea44d48caf19c96fe7f8f2f3713d980b9169516d588775029"
	I0110 02:45:30.893108  208704 cri.go:96] found id: ""
	I0110 02:45:30.893182  208704 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 02:45:30.911047  208704 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:45:30Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:45:30.911146  208704 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:45:30.926794  208704 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 02:45:30.926830  208704 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 02:45:30.926925  208704 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 02:45:30.940480  208704 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 02:45:30.940975  208704 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-290628" does not appear in /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:45:30.941124  208704 kubeconfig.go:62] /home/jenkins/minikube-integration/22414-2353/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-290628" cluster setting kubeconfig missing "embed-certs-290628" context setting]
	I0110 02:45:30.941452  208704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:45:30.942990  208704 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 02:45:30.952371  208704 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0110 02:45:30.952405  208704 kubeadm.go:602] duration metric: took 25.568961ms to restartPrimaryControlPlane
	I0110 02:45:30.952452  208704 kubeadm.go:403] duration metric: took 115.175841ms to StartCluster
	I0110 02:45:30.952474  208704 settings.go:142] acquiring lock: {Name:mkf19ab3097ad600bdb33c5b47b20062ddaabac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:45:30.952545  208704 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:45:30.953662  208704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:45:30.953928  208704 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:45:30.954451  208704 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:45:30.954530  208704 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-290628"
	I0110 02:45:30.954561  208704 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-290628"
	W0110 02:45:30.954572  208704 addons.go:248] addon storage-provisioner should already be in state true
	I0110 02:45:30.954594  208704 host.go:66] Checking if "embed-certs-290628" exists ...
	I0110 02:45:30.955118  208704 cli_runner.go:164] Run: docker container inspect embed-certs-290628 --format={{.State.Status}}
	I0110 02:45:30.955323  208704 config.go:182] Loaded profile config "embed-certs-290628": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:45:30.955392  208704 addons.go:70] Setting dashboard=true in profile "embed-certs-290628"
	I0110 02:45:30.955403  208704 addons.go:239] Setting addon dashboard=true in "embed-certs-290628"
	W0110 02:45:30.955408  208704 addons.go:248] addon dashboard should already be in state true
	I0110 02:45:30.955437  208704 host.go:66] Checking if "embed-certs-290628" exists ...
	I0110 02:45:30.955694  208704 addons.go:70] Setting default-storageclass=true in profile "embed-certs-290628"
	I0110 02:45:30.955713  208704 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-290628"
	I0110 02:45:30.955971  208704 cli_runner.go:164] Run: docker container inspect embed-certs-290628 --format={{.State.Status}}
	I0110 02:45:30.956388  208704 cli_runner.go:164] Run: docker container inspect embed-certs-290628 --format={{.State.Status}}
	I0110 02:45:30.958319  208704 out.go:179] * Verifying Kubernetes components...
	I0110 02:45:30.961466  208704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:45:31.004947  208704 addons.go:239] Setting addon default-storageclass=true in "embed-certs-290628"
	W0110 02:45:31.004974  208704 addons.go:248] addon default-storageclass should already be in state true
	I0110 02:45:31.004997  208704 host.go:66] Checking if "embed-certs-290628" exists ...
	I0110 02:45:31.005431  208704 cli_runner.go:164] Run: docker container inspect embed-certs-290628 --format={{.State.Status}}
	I0110 02:45:31.011218  208704 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:45:31.016206  208704 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:45:31.016240  208704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:45:31.016314  208704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:45:31.029898  208704 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 02:45:31.032862  208704 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 02:45:31.035703  208704 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 02:45:31.035737  208704 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 02:45:31.035833  208704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:45:31.068308  208704 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:45:31.068336  208704 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:45:31.068399  208704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-290628
	I0110 02:45:31.095654  208704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa Username:docker}
	I0110 02:45:31.098874  208704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa Username:docker}
	I0110 02:45:31.126966  208704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/embed-certs-290628/id_rsa Username:docker}
	I0110 02:45:31.330037  208704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:45:31.383902  208704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:45:31.390479  208704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 02:45:31.390551  208704 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 02:45:31.435194  208704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:45:31.467086  208704 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 02:45:31.467195  208704 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 02:45:31.528500  208704 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 02:45:31.528574  208704 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 02:45:31.587360  208704 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 02:45:31.587379  208704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 02:45:31.629967  208704 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 02:45:31.629987  208704 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 02:45:31.649018  208704 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 02:45:31.649039  208704 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 02:45:31.667116  208704 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 02:45:31.667137  208704 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 02:45:31.690991  208704 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 02:45:31.691066  208704 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 02:45:31.710567  208704 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:45:31.710637  208704 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 02:45:31.732857  208704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:45:35.881919  208704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.551792434s)
	I0110 02:45:35.882028  208704 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.498048051s)
	I0110 02:45:35.882093  208704 node_ready.go:35] waiting up to 6m0s for node "embed-certs-290628" to be "Ready" ...
	I0110 02:45:35.882424  208704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.447157507s)
	I0110 02:45:35.883067  208704 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.150120502s)
	I0110 02:45:35.886561  208704 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-290628 addons enable metrics-server
	
	I0110 02:45:35.916421  208704 node_ready.go:49] node "embed-certs-290628" is "Ready"
	I0110 02:45:35.916494  208704 node_ready.go:38] duration metric: took 34.369148ms for node "embed-certs-290628" to be "Ready" ...
	I0110 02:45:35.916522  208704 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:45:35.916605  208704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:45:35.924137  208704 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0110 02:45:35.927038  208704 addons.go:530] duration metric: took 4.972586741s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
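Once the apply commands complete, the enabled set can be cross-checked from the host; assuming the same profile, the addon status table comes from:

	minikube -p embed-certs-290628 addons list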
	I0110 02:45:35.931113  208704 api_server.go:72] duration metric: took 4.977145665s to wait for apiserver process to appear ...
	I0110 02:45:35.931131  208704 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:45:35.931150  208704 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:45:35.939626  208704 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
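The healthz probe above hits the API server directly over TLS. The same probe can be reproduced from the host with curl, assuming 192.168.76.2:8443 is reachable from the host network and the default anonymous access to the health endpoints is still in place; -k skips verification of the cluster's self-signed certificate:

	curl -k https://192.168.76.2:8443/healthz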
	I0110 02:45:35.940794  208704 api_server.go:141] control plane version: v1.35.0
	I0110 02:45:35.940855  208704 api_server.go:131] duration metric: took 9.713475ms to wait for apiserver health ...
	I0110 02:45:35.940878  208704 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:45:35.945276  208704 system_pods.go:59] 8 kube-system pods found
	I0110 02:45:35.945368  208704 system_pods.go:61] "coredns-7d764666f9-jwjfn" [858d8d57-f89b-4d9b-8aa5-dbf6572c266d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:45:35.945396  208704 system_pods.go:61] "etcd-embed-certs-290628" [7905e553-9138-4822-9649-ee6f3e2eb58d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:45:35.945418  208704 system_pods.go:61] "kindnet-g87jl" [be4cb760-ae30-4fd4-a122-e8e7f957939c] Running
	I0110 02:45:35.945457  208704 system_pods.go:61] "kube-apiserver-embed-certs-290628" [2e5da56c-c5c4-44b7-a1fd-36b46bd328d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:45:35.945485  208704 system_pods.go:61] "kube-controller-manager-embed-certs-290628" [186646b5-8524-41e4-9566-6466f756d103] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:45:35.945521  208704 system_pods.go:61] "kube-proxy-bdvjd" [af84947c-dfb8-478d-a250-f573f1edd0d7] Running
	I0110 02:45:35.945548  208704 system_pods.go:61] "kube-scheduler-embed-certs-290628" [699bf674-d25c-4ebc-8db6-5d58a627de92] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:45:35.945571  208704 system_pods.go:61] "storage-provisioner" [d2832396-2583-470a-a396-7e8bb76186de] Running
	I0110 02:45:35.945606  208704 system_pods.go:74] duration metric: took 4.708023ms to wait for pod list to return data ...
	I0110 02:45:35.945634  208704 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:45:35.948767  208704 default_sa.go:45] found service account: "default"
	I0110 02:45:35.948822  208704 default_sa.go:55] duration metric: took 3.167947ms for default service account to be created ...
	I0110 02:45:35.948855  208704 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:45:35.952717  208704 system_pods.go:86] 8 kube-system pods found
	I0110 02:45:35.952801  208704 system_pods.go:89] "coredns-7d764666f9-jwjfn" [858d8d57-f89b-4d9b-8aa5-dbf6572c266d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:45:35.952829  208704 system_pods.go:89] "etcd-embed-certs-290628" [7905e553-9138-4822-9649-ee6f3e2eb58d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:45:35.952866  208704 system_pods.go:89] "kindnet-g87jl" [be4cb760-ae30-4fd4-a122-e8e7f957939c] Running
	I0110 02:45:35.952896  208704 system_pods.go:89] "kube-apiserver-embed-certs-290628" [2e5da56c-c5c4-44b7-a1fd-36b46bd328d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:45:35.952920  208704 system_pods.go:89] "kube-controller-manager-embed-certs-290628" [186646b5-8524-41e4-9566-6466f756d103] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:45:35.952957  208704 system_pods.go:89] "kube-proxy-bdvjd" [af84947c-dfb8-478d-a250-f573f1edd0d7] Running
	I0110 02:45:35.952984  208704 system_pods.go:89] "kube-scheduler-embed-certs-290628" [699bf674-d25c-4ebc-8db6-5d58a627de92] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:45:35.953007  208704 system_pods.go:89] "storage-provisioner" [d2832396-2583-470a-a396-7e8bb76186de] Running
	I0110 02:45:35.953046  208704 system_pods.go:126] duration metric: took 4.171111ms to wait for k8s-apps to be running ...
	I0110 02:45:35.953075  208704 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:45:35.953157  208704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:45:35.968293  208704 system_svc.go:56] duration metric: took 15.201744ms WaitForService to wait for kubelet
	I0110 02:45:35.968320  208704 kubeadm.go:587] duration metric: took 5.014355548s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:45:35.968344  208704 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:45:35.971386  208704 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 02:45:35.971420  208704 node_conditions.go:123] node cpu capacity is 2
	I0110 02:45:35.971434  208704 node_conditions.go:105] duration metric: took 3.084248ms to run NodePressure ...
	I0110 02:45:35.971454  208704 start.go:242] waiting for startup goroutines ...
	I0110 02:45:35.971466  208704 start.go:247] waiting for cluster config update ...
	I0110 02:45:35.971480  208704 start.go:256] writing updated cluster config ...
	I0110 02:45:35.971771  208704 ssh_runner.go:195] Run: rm -f paused
	I0110 02:45:35.975428  208704 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:45:35.979443  208704 pod_ready.go:83] waiting for pod "coredns-7d764666f9-jwjfn" in "kube-system" namespace to be "Ready" or be gone ...
	W0110 02:45:37.991540  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:45:40.484362  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:45:42.488081  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:45:44.984522  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:45:47.484647  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:45:49.485270  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:45:51.984822  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:45:54.484468  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:45:56.484797  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:45:58.485302  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:46:00.485337  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:46:02.984487  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:46:05.485425  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:46:07.985045  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:46:09.985084  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:46:12.485028  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	W0110 02:46:14.485098  208704 pod_ready.go:104] pod "coredns-7d764666f9-jwjfn" is not "Ready", error: <nil>
	I0110 02:46:16.485049  208704 pod_ready.go:94] pod "coredns-7d764666f9-jwjfn" is "Ready"
	I0110 02:46:16.485079  208704 pod_ready.go:86] duration metric: took 40.505614363s for pod "coredns-7d764666f9-jwjfn" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:46:16.487585  208704 pod_ready.go:83] waiting for pod "etcd-embed-certs-290628" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:46:16.491827  208704 pod_ready.go:94] pod "etcd-embed-certs-290628" is "Ready"
	I0110 02:46:16.491852  208704 pod_ready.go:86] duration metric: took 4.244923ms for pod "etcd-embed-certs-290628" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:46:16.494036  208704 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-290628" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:46:16.498298  208704 pod_ready.go:94] pod "kube-apiserver-embed-certs-290628" is "Ready"
	I0110 02:46:16.498357  208704 pod_ready.go:86] duration metric: took 4.294432ms for pod "kube-apiserver-embed-certs-290628" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:46:16.500558  208704 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-290628" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:46:16.683512  208704 pod_ready.go:94] pod "kube-controller-manager-embed-certs-290628" is "Ready"
	I0110 02:46:16.683543  208704 pod_ready.go:86] duration metric: took 182.958771ms for pod "kube-controller-manager-embed-certs-290628" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:46:16.883495  208704 pod_ready.go:83] waiting for pod "kube-proxy-bdvjd" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:46:17.283489  208704 pod_ready.go:94] pod "kube-proxy-bdvjd" is "Ready"
	I0110 02:46:17.283518  208704 pod_ready.go:86] duration metric: took 399.996332ms for pod "kube-proxy-bdvjd" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:46:17.483653  208704 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-290628" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:46:17.883678  208704 pod_ready.go:94] pod "kube-scheduler-embed-certs-290628" is "Ready"
	I0110 02:46:17.883709  208704 pod_ready.go:86] duration metric: took 400.028414ms for pod "kube-scheduler-embed-certs-290628" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:46:17.883722  208704 pod_ready.go:40] duration metric: took 41.908254882s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:46:17.940074  208704 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 02:46:17.944148  208704 out.go:203] 
	W0110 02:46:17.947968  208704 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 02:46:17.951413  208704 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:46:17.955061  208704 out.go:179] * Done! kubectl is now configured to use "embed-certs-290628" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 10 02:46:05 embed-certs-290628 crio[662]: time="2026-01-10T02:46:05.454170881Z" level=info msg="Created container 0aae3cd379ca33876ea4a9b0c32378c10261c80e2ab472d4eee964f838bb8955: kube-system/storage-provisioner/storage-provisioner" id=e86e2af4-041d-4295-abb1-51b634c49ef5 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:46:05 embed-certs-290628 crio[662]: time="2026-01-10T02:46:05.455206267Z" level=info msg="Starting container: 0aae3cd379ca33876ea4a9b0c32378c10261c80e2ab472d4eee964f838bb8955" id=cc807a71-c742-40ca-b183-059ba2a849e6 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:46:05 embed-certs-290628 crio[662]: time="2026-01-10T02:46:05.457604437Z" level=info msg="Started container" PID=1701 containerID=0aae3cd379ca33876ea4a9b0c32378c10261c80e2ab472d4eee964f838bb8955 description=kube-system/storage-provisioner/storage-provisioner id=cc807a71-c742-40ca-b183-059ba2a849e6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=39247c363004b59b9ae8df507e033d6660e20b7f8b2aa1f7239700a2b28294d2
	Jan 10 02:46:15 embed-certs-290628 crio[662]: time="2026-01-10T02:46:15.23159149Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:46:15 embed-certs-290628 crio[662]: time="2026-01-10T02:46:15.231628051Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:46:15 embed-certs-290628 crio[662]: time="2026-01-10T02:46:15.235900994Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:46:15 embed-certs-290628 crio[662]: time="2026-01-10T02:46:15.235935906Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:46:15 embed-certs-290628 crio[662]: time="2026-01-10T02:46:15.239880464Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:46:15 embed-certs-290628 crio[662]: time="2026-01-10T02:46:15.239913833Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:46:15 embed-certs-290628 crio[662]: time="2026-01-10T02:46:15.239939916Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 10 02:46:15 embed-certs-290628 crio[662]: time="2026-01-10T02:46:15.24398505Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:46:15 embed-certs-290628 crio[662]: time="2026-01-10T02:46:15.244018526Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:46:22 embed-certs-290628 crio[662]: time="2026-01-10T02:46:22.273292121Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=29e5e6bb-8817-4500-868a-a95f4d6d78ea name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:46:22 embed-certs-290628 crio[662]: time="2026-01-10T02:46:22.274521897Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c6c7ead2-e4c0-4427-b381-1705bf7dadf9 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:46:22 embed-certs-290628 crio[662]: time="2026-01-10T02:46:22.275528155Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq/dashboard-metrics-scraper" id=54f0c51d-3ae6-4f23-a86d-594b0c8de887 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:46:22 embed-certs-290628 crio[662]: time="2026-01-10T02:46:22.275620715Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:46:22 embed-certs-290628 crio[662]: time="2026-01-10T02:46:22.282479877Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:46:22 embed-certs-290628 crio[662]: time="2026-01-10T02:46:22.28317571Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:46:22 embed-certs-290628 crio[662]: time="2026-01-10T02:46:22.297877786Z" level=info msg="Created container e2653d95359074ec9974d0190d972ff3ef51db7c891e922f11b26f760b02f6e3: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq/dashboard-metrics-scraper" id=54f0c51d-3ae6-4f23-a86d-594b0c8de887 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:46:22 embed-certs-290628 crio[662]: time="2026-01-10T02:46:22.298499653Z" level=info msg="Starting container: e2653d95359074ec9974d0190d972ff3ef51db7c891e922f11b26f760b02f6e3" id=a3499546-6009-4fab-910a-c9a48855c676 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:46:22 embed-certs-290628 crio[662]: time="2026-01-10T02:46:22.302314754Z" level=info msg="Started container" PID=1772 containerID=e2653d95359074ec9974d0190d972ff3ef51db7c891e922f11b26f760b02f6e3 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq/dashboard-metrics-scraper id=a3499546-6009-4fab-910a-c9a48855c676 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a09055efe4166be1eec9405f9d10dacd82d1c10f96a661bcc7e53089c6a667d1
	Jan 10 02:46:22 embed-certs-290628 conmon[1770]: conmon e2653d95359074ec9974 <ninfo>: container 1772 exited with status 1
	Jan 10 02:46:22 embed-certs-290628 crio[662]: time="2026-01-10T02:46:22.465876999Z" level=info msg="Removing container: eaf120bff1b3eb57dfad75804087fc1fc5dd7290e8e06b80cf29e676efa2b252" id=42db3b3f-fb26-4392-93fe-ded458d0da07 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:46:22 embed-certs-290628 crio[662]: time="2026-01-10T02:46:22.472584567Z" level=info msg="Error loading conmon cgroup of container eaf120bff1b3eb57dfad75804087fc1fc5dd7290e8e06b80cf29e676efa2b252: cgroup deleted" id=42db3b3f-fb26-4392-93fe-ded458d0da07 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:46:22 embed-certs-290628 crio[662]: time="2026-01-10T02:46:22.475406694Z" level=info msg="Removed container eaf120bff1b3eb57dfad75804087fc1fc5dd7290e8e06b80cf29e676efa2b252: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq/dashboard-metrics-scraper" id=42db3b3f-fb26-4392-93fe-ded458d0da07 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	e2653d9535907       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago       Exited              dashboard-metrics-scraper   3                   a09055efe4166       dashboard-metrics-scraper-867fb5f87b-7tmmq   kubernetes-dashboard
	0aae3cd379ca3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           29 seconds ago       Running             storage-provisioner         2                   39247c363004b       storage-provisioner                          kube-system
	54cdd96263e16       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   51 seconds ago       Running             kubernetes-dashboard        0                   e535147996934       kubernetes-dashboard-b84665fb8-hxqv7         kubernetes-dashboard
	5ad18c1213ce1       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           59 seconds ago       Running             coredns                     1                   b3466e6d60360       coredns-7d764666f9-jwjfn                     kube-system
	3020727c504b0       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           59 seconds ago       Running             busybox                     1                   7cdfab0e39387       busybox                                      default
	b14c2926b4e74       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           59 seconds ago       Exited              storage-provisioner         1                   39247c363004b       storage-provisioner                          kube-system
	4e7d5f61e1851       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           59 seconds ago       Running             kube-proxy                  1                   9277e4454eddb       kube-proxy-bdvjd                             kube-system
	cf97d9a5c9164       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           59 seconds ago       Running             kindnet-cni                 1                   11a21720211cb       kindnet-g87jl                                kube-system
	35fda1be43023       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           About a minute ago   Running             kube-scheduler              1                   75e7e9eef20e9       kube-scheduler-embed-certs-290628            kube-system
	ddcfa0a7f6936       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           About a minute ago   Running             kube-controller-manager     1                   4d14a397c21aa       kube-controller-manager-embed-certs-290628   kube-system
	5205ca6cd4481       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           About a minute ago   Running             etcd                        1                   e968b91835da8       etcd-embed-certs-290628                      kube-system
	95e8806653f7a       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           About a minute ago   Running             kube-apiserver              1                   f7303fbd39e0c       kube-apiserver-embed-certs-290628            kube-system
	
	
	==> coredns [5ad18c1213ce1ba79c8b25d06d31b054e5c6d7d41fb47e3deaf5b50002f70222] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:34346 - 44887 "HINFO IN 8182947147099260957.3364710826530644747. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01329856s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	
	
	==> describe nodes <==
	Name:               embed-certs-290628
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-290628
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=embed-certs-290628
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_44_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:44:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-290628
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:46:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:46:05 +0000   Sat, 10 Jan 2026 02:44:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:46:05 +0000   Sat, 10 Jan 2026 02:44:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:46:05 +0000   Sat, 10 Jan 2026 02:44:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:46:05 +0000   Sat, 10 Jan 2026 02:44:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-290628
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                d1493119-98e9-4ef8-b2ce-67a3672d1963
	  Boot ID:                    41f52236-76b9-4dc8-bfe3-121fbdeb9659
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-7d764666f9-jwjfn                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     112s
	  kube-system                 etcd-embed-certs-290628                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         117s
	  kube-system                 kindnet-g87jl                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-embed-certs-290628             250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-embed-certs-290628    200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-bdvjd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-embed-certs-290628             100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-7tmmq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-hxqv7          0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  113s  node-controller  Node embed-certs-290628 event: Registered Node embed-certs-290628 in Controller
	  Normal  RegisteredNode  57s   node-controller  Node embed-certs-290628 event: Registered Node embed-certs-290628 in Controller
	
	
	==> dmesg <==
	[Jan10 02:10] overlayfs: idmapped layers are currently not supported
	[Jan10 02:14] overlayfs: idmapped layers are currently not supported
	[ +27.765975] overlayfs: idmapped layers are currently not supported
	[Jan10 02:15] overlayfs: idmapped layers are currently not supported
	[Jan10 02:16] overlayfs: idmapped layers are currently not supported
	[Jan10 02:17] overlayfs: idmapped layers are currently not supported
	[Jan10 02:18] overlayfs: idmapped layers are currently not supported
	[ +23.212409] overlayfs: idmapped layers are currently not supported
	[  +9.866169] overlayfs: idmapped layers are currently not supported
	[Jan10 02:19] overlayfs: idmapped layers are currently not supported
	[Jan10 02:20] overlayfs: idmapped layers are currently not supported
	[ +35.960240] overlayfs: idmapped layers are currently not supported
	[Jan10 02:21] overlayfs: idmapped layers are currently not supported
	[Jan10 02:23] overlayfs: idmapped layers are currently not supported
	[Jan10 02:24] overlayfs: idmapped layers are currently not supported
	[Jan10 02:25] overlayfs: idmapped layers are currently not supported
	[Jan10 02:30] overlayfs: idmapped layers are currently not supported
	[Jan10 02:31] overlayfs: idmapped layers are currently not supported
	[Jan10 02:35] overlayfs: idmapped layers are currently not supported
	[Jan10 02:37] overlayfs: idmapped layers are currently not supported
	[Jan10 02:41] overlayfs: idmapped layers are currently not supported
	[ +37.534021] overlayfs: idmapped layers are currently not supported
	[Jan10 02:43] overlayfs: idmapped layers are currently not supported
	[Jan10 02:44] overlayfs: idmapped layers are currently not supported
	[Jan10 02:45] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5205ca6cd4481f5c3b3669c60cae1e88fadbf6a0dbed071dfc5ed887d68c1eb6] <==
	{"level":"info","ts":"2026-01-10T02:45:31.357738Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:45:31.357747Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:45:31.357938Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:45:31.357949Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:45:31.358936Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2026-01-10T02:45:31.359005Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T02:45:31.359075Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T02:45:32.115894Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T02:45:32.116025Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:45:32.116122Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T02:45:32.116163Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:45:32.116202Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T02:45:32.121950Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T02:45:32.122064Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:45:32.122119Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T02:45:32.122154Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T02:45:32.124001Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:embed-certs-290628 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:45:32.124253Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:45:32.124406Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:45:32.124545Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:45:32.124582Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:45:32.125538Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:45:32.129817Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T02:45:32.134644Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:45:32.135456Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 02:46:34 up  1:29,  0 user,  load average: 1.04, 1.62, 1.75
	Linux embed-certs-290628 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cf97d9a5c9164ffd276eb59e2c2d5a25f4e245b1724464838763e7794e90f36e] <==
	I0110 02:45:35.042267       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:45:35.042485       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 02:45:35.042598       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:45:35.042611       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:45:35.042620       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:45:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:45:35.227386       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:45:35.227408       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:45:35.227417       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:45:35.227731       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0110 02:46:05.227470       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0110 02:46:05.227472       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0110 02:46:05.227694       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0110 02:46:05.228395       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I0110 02:46:06.728168       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:46:06.728196       1 metrics.go:72] Registering metrics
	I0110 02:46:06.728265       1 controller.go:711] "Syncing nftables rules"
	I0110 02:46:15.226612       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:46:15.226735       1 main.go:301] handling current node
	I0110 02:46:25.227035       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:46:25.227068       1 main.go:301] handling current node
	
	
	==> kube-apiserver [95e8806653f7a62ea44d48caf19c96fe7f8f2f3713d980b9169516d588775029] <==
	I0110 02:45:34.252517       1 aggregator.go:187] initial CRD sync complete...
	I0110 02:45:34.252525       1 autoregister_controller.go:144] Starting autoregister controller
	I0110 02:45:34.252531       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 02:45:34.252536       1 cache.go:39] Caches are synced for autoregister controller
	I0110 02:45:34.260412       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:34.260775       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:34.260808       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0110 02:45:34.260997       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 02:45:34.276082       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 02:45:34.309489       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 02:45:34.312903       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:34.312920       1 policy_source.go:248] refreshing policies
	I0110 02:45:34.351607       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:45:34.368769       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:45:34.880570       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:45:35.378396       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:45:35.526641       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:45:35.591679       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:45:35.623951       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:45:35.775351       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.229.160"}
	I0110 02:45:35.801622       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.154.191"}
	I0110 02:45:37.873594       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:45:37.873642       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:45:37.974855       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:45:38.024100       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ddcfa0a7f69364becc7ff3f5e5d934a47a126035e79baf2f684410b0a13244b8] <==
	I0110 02:45:37.386279       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.386332       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.386434       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 02:45:37.386535       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="embed-certs-290628"
	I0110 02:45:37.386615       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0110 02:45:37.386680       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.386748       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.386837       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.387033       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.387104       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.387190       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.387243       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.387296       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.390791       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.390963       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.391035       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.391391       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.385287       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:45:37.393395       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.425572       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.488626       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.488745       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:45:37.488759       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:45:37.493609       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:37.985550       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [4e7d5f61e1851fb6bf16a2740f2dac2f735df2977fb762c6d77ec2fd39e8aa7b] <==
	I0110 02:45:35.337295       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:45:35.466597       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:45:35.569625       1 shared_informer.go:377] "Caches are synced"
	I0110 02:45:35.569666       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 02:45:35.569758       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:45:35.687811       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:45:35.690748       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:45:35.720494       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:45:35.721037       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:45:35.721100       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:45:35.740113       1 config.go:200] "Starting service config controller"
	I0110 02:45:35.745641       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:45:35.741297       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:45:35.745701       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:45:35.741742       1 config.go:309] "Starting node config controller"
	I0110 02:45:35.745726       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:45:35.745731       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:45:35.741309       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:45:35.745738       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:45:35.846606       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:45:35.846646       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 02:45:35.856808       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [35fda1be4302381d9f829358ffbd2635d0f9ef32fc82cec42ebc5bb2174d6351] <==
	I0110 02:45:32.229578       1 serving.go:386] Generated self-signed cert in-memory
	W0110 02:45:34.161568       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 02:45:34.161621       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 02:45:34.161632       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 02:45:34.161639       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 02:45:34.274216       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 02:45:34.274251       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:45:34.283123       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 02:45:34.283275       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 02:45:34.283292       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:45:34.283309       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 02:45:34.388734       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:45:50 embed-certs-290628 kubelet[794]: E0110 02:45:50.380149     794 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-290628" containerName="kube-scheduler"
	Jan 10 02:45:50 embed-certs-290628 kubelet[794]: E0110 02:45:50.383200     794 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq" containerName="dashboard-metrics-scraper"
	Jan 10 02:45:50 embed-certs-290628 kubelet[794]: I0110 02:45:50.383247     794 scope.go:122] "RemoveContainer" containerID="183dab0f7b1b5b7c0e4ec4750b4d61227371e1ee6e2d9327e8a9522c1b5468f5"
	Jan 10 02:45:50 embed-certs-290628 kubelet[794]: E0110 02:45:50.383398     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7tmmq_kubernetes-dashboard(b86ea3de-5b08-4cbd-b9c6-a07f05c7fd98)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq" podUID="b86ea3de-5b08-4cbd-b9c6-a07f05c7fd98"
	Jan 10 02:45:59 embed-certs-290628 kubelet[794]: E0110 02:45:59.271925     794 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq" containerName="dashboard-metrics-scraper"
	Jan 10 02:45:59 embed-certs-290628 kubelet[794]: I0110 02:45:59.271991     794 scope.go:122] "RemoveContainer" containerID="183dab0f7b1b5b7c0e4ec4750b4d61227371e1ee6e2d9327e8a9522c1b5468f5"
	Jan 10 02:45:59 embed-certs-290628 kubelet[794]: I0110 02:45:59.401068     794 scope.go:122] "RemoveContainer" containerID="183dab0f7b1b5b7c0e4ec4750b4d61227371e1ee6e2d9327e8a9522c1b5468f5"
	Jan 10 02:45:59 embed-certs-290628 kubelet[794]: E0110 02:45:59.403878     794 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq" containerName="dashboard-metrics-scraper"
	Jan 10 02:45:59 embed-certs-290628 kubelet[794]: I0110 02:45:59.404142     794 scope.go:122] "RemoveContainer" containerID="eaf120bff1b3eb57dfad75804087fc1fc5dd7290e8e06b80cf29e676efa2b252"
	Jan 10 02:45:59 embed-certs-290628 kubelet[794]: E0110 02:45:59.404405     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7tmmq_kubernetes-dashboard(b86ea3de-5b08-4cbd-b9c6-a07f05c7fd98)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq" podUID="b86ea3de-5b08-4cbd-b9c6-a07f05c7fd98"
	Jan 10 02:46:00 embed-certs-290628 kubelet[794]: E0110 02:46:00.406908     794 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq" containerName="dashboard-metrics-scraper"
	Jan 10 02:46:00 embed-certs-290628 kubelet[794]: I0110 02:46:00.406951     794 scope.go:122] "RemoveContainer" containerID="eaf120bff1b3eb57dfad75804087fc1fc5dd7290e8e06b80cf29e676efa2b252"
	Jan 10 02:46:00 embed-certs-290628 kubelet[794]: E0110 02:46:00.407146     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7tmmq_kubernetes-dashboard(b86ea3de-5b08-4cbd-b9c6-a07f05c7fd98)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq" podUID="b86ea3de-5b08-4cbd-b9c6-a07f05c7fd98"
	Jan 10 02:46:05 embed-certs-290628 kubelet[794]: I0110 02:46:05.420116     794 scope.go:122] "RemoveContainer" containerID="b14c2926b4e747d67085d8e65c5115d40f6a48d7c026c413f61aeaca8a99c96e"
	Jan 10 02:46:16 embed-certs-290628 kubelet[794]: E0110 02:46:16.335869     794 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-jwjfn" containerName="coredns"
	Jan 10 02:46:22 embed-certs-290628 kubelet[794]: E0110 02:46:22.272875     794 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq" containerName="dashboard-metrics-scraper"
	Jan 10 02:46:22 embed-certs-290628 kubelet[794]: I0110 02:46:22.272914     794 scope.go:122] "RemoveContainer" containerID="eaf120bff1b3eb57dfad75804087fc1fc5dd7290e8e06b80cf29e676efa2b252"
	Jan 10 02:46:22 embed-certs-290628 kubelet[794]: I0110 02:46:22.464486     794 scope.go:122] "RemoveContainer" containerID="eaf120bff1b3eb57dfad75804087fc1fc5dd7290e8e06b80cf29e676efa2b252"
	Jan 10 02:46:23 embed-certs-290628 kubelet[794]: E0110 02:46:23.468532     794 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq" containerName="dashboard-metrics-scraper"
	Jan 10 02:46:23 embed-certs-290628 kubelet[794]: I0110 02:46:23.468569     794 scope.go:122] "RemoveContainer" containerID="e2653d95359074ec9974d0190d972ff3ef51db7c891e922f11b26f760b02f6e3"
	Jan 10 02:46:23 embed-certs-290628 kubelet[794]: E0110 02:46:23.468721     794 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-7tmmq_kubernetes-dashboard(b86ea3de-5b08-4cbd-b9c6-a07f05c7fd98)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-7tmmq" podUID="b86ea3de-5b08-4cbd-b9c6-a07f05c7fd98"
	Jan 10 02:46:30 embed-certs-290628 kubelet[794]: I0110 02:46:30.160139     794 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jan 10 02:46:30 embed-certs-290628 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 02:46:30 embed-certs-290628 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 02:46:30 embed-certs-290628 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [54cdd96263e164280da6cc533b71f17c8723cb4d97eb22286bfa92df9daa37aa] <==
	2026/01/10 02:45:42 Using namespace: kubernetes-dashboard
	2026/01/10 02:45:42 Using in-cluster config to connect to apiserver
	2026/01/10 02:45:42 Using secret token for csrf signing
	2026/01/10 02:45:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 02:45:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 02:45:42 Successful initial request to the apiserver, version: v1.35.0
	2026/01/10 02:45:42 Generating JWE encryption key
	2026/01/10 02:45:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 02:45:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 02:45:43 Initializing JWE encryption key from synchronized object
	2026/01/10 02:45:43 Creating in-cluster Sidecar client
	2026/01/10 02:45:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:45:43 Serving insecurely on HTTP port: 9090
	2026/01/10 02:46:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:45:42 Starting overwatch
	
	
	==> storage-provisioner [0aae3cd379ca33876ea4a9b0c32378c10261c80e2ab472d4eee964f838bb8955] <==
	W0110 02:46:05.488695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:08.944391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:13.205092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:16.803053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:19.856108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:22.877858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:22.882659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:46:22.882813       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 02:46:22.882970       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-290628_aae469b9-1735-440b-ac1f-2aad80c0dff1!
	I0110 02:46:22.883175       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"07064d36-5187-4459-b216-f8310ec76f12", APIVersion:"v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-290628_aae469b9-1735-440b-ac1f-2aad80c0dff1 became leader
	W0110 02:46:22.891460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:22.895153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:46:22.983378       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-290628_aae469b9-1735-440b-ac1f-2aad80c0dff1!
	W0110 02:46:24.898049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:24.902338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:26.913917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:26.952989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:28.955607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:28.961085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:30.963700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:30.971214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:32.973660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:32.978984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:34.982864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:46:34.988249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b14c2926b4e747d67085d8e65c5115d40f6a48d7c026c413f61aeaca8a99c96e] <==
	I0110 02:45:35.153917       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 02:46:05.167211       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-290628 -n embed-certs-290628
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-290628 -n embed-certs-290628: exit status 2 (344.256111ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-290628 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.99s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-676905 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-676905 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (247.826791ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:47:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
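The exit status 11 above is raised before the addon is ever applied: per the MK_ADDON_ENABLE_PAUSED message, `addons enable` first checks whether the cluster is paused, and that check shells out to `sudo runc list -f json` inside the node, which exits 1 because /run/runc does not exist on this CRI-O profile. A minimal sketch for inspecting the runtime state by hand, assuming the no-preload-676905 node container is still running:

	docker exec no-preload-676905 sudo runc list -f json     # reproduces the failing check ("open /run/runc: no such file or directory")
	docker exec no-preload-676905 sudo crictl ps -a --quiet  # the CRI-O view of the same containers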
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-676905 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-676905 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-676905 describe deploy/metrics-server -n kube-system: exit status 1 (76.941871ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-676905 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
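The expected value here is just the --registries override prefixed onto the --images override from the enable command above (fake.domain/ + registry.k8s.io/echoserver:1.4); the describe fails only because the enable never got far enough to create the deployment. Once the addon does enable, the image actually rolled out can be read directly, for example (a sketch; the jsonpath chosen here is an assumption):

	kubectl --context no-preload-676905 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'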
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-676905
helpers_test.go:244: (dbg) docker inspect no-preload-676905:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "edb3b90bff05a0b60beee1473618b471499355017fcb496f3f3b9d44b8906d49",
	        "Created": "2026-01-10T02:46:39.759659544Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213266,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:46:39.820238519Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/edb3b90bff05a0b60beee1473618b471499355017fcb496f3f3b9d44b8906d49/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/edb3b90bff05a0b60beee1473618b471499355017fcb496f3f3b9d44b8906d49/hostname",
	        "HostsPath": "/var/lib/docker/containers/edb3b90bff05a0b60beee1473618b471499355017fcb496f3f3b9d44b8906d49/hosts",
	        "LogPath": "/var/lib/docker/containers/edb3b90bff05a0b60beee1473618b471499355017fcb496f3f3b9d44b8906d49/edb3b90bff05a0b60beee1473618b471499355017fcb496f3f3b9d44b8906d49-json.log",
	        "Name": "/no-preload-676905",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-676905:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-676905",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "edb3b90bff05a0b60beee1473618b471499355017fcb496f3f3b9d44b8906d49",
	                "LowerDir": "/var/lib/docker/overlay2/fc94be4f76220c0d09795aa60f43389a06dd04e0a5955af0531cf368de6efc6b-init/diff:/var/lib/docker/overlay2/2b235871be32ebd3e55c0a490bf5b289824c6be94b4d2763f5bca3b814af3fd1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc94be4f76220c0d09795aa60f43389a06dd04e0a5955af0531cf368de6efc6b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc94be4f76220c0d09795aa60f43389a06dd04e0a5955af0531cf368de6efc6b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc94be4f76220c0d09795aa60f43389a06dd04e0a5955af0531cf368de6efc6b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-676905",
	                "Source": "/var/lib/docker/volumes/no-preload-676905/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-676905",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-676905",
	                "name.minikube.sigs.k8s.io": "no-preload-676905",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6398810be534c2b1d16bbfb217f996bdc19f685db52c1590ebf90b184d4e434e",
	            "SandboxKey": "/var/run/docker/netns/6398810be534",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-676905": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:a6:91:88:6a:67",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "146cb46c14407056c3f694e77394bc66aacde5ff5ac19837f4400799ed6e0ce7",
	                    "EndpointID": "ef43122644e3ebf5e724ad9c316e1808472f3865851b80c2ba7ea10ac6be441e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-676905",
	                        "edb3b90bff05"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
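Note from the inspect output that every node port is published only on 127.0.0.1 with an ephemeral host port (22/tcp is mapped to 33068 here); the provisioning log further below resolves the SSH endpoint with the same Go template. The equivalent manual query, assuming the container is still up, is:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-676905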
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-676905 -n no-preload-676905
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-676905 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-676905 logs -n 25: (1.144777556s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p cert-options-295914 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-295914          │ jenkins │ v1.37.0 │ 10 Jan 26 02:40 UTC │ 10 Jan 26 02:41 UTC │
	│ ssh     │ cert-options-295914 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-295914          │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:41 UTC │
	│ ssh     │ -p cert-options-295914 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-295914          │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:41 UTC │
	│ delete  │ -p cert-options-295914                                                                                                                                                                                                                        │ cert-options-295914          │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:41 UTC │
	│ start   │ -p old-k8s-version-736081 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:41 UTC │ 10 Jan 26 02:42 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-736081 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │                     │
	│ stop    │ -p old-k8s-version-736081 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-736081 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:42 UTC │
	│ start   │ -p old-k8s-version-736081 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:43 UTC │
	│ image   │ old-k8s-version-736081 image list --format=json                                                                                                                                                                                               │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ pause   │ -p old-k8s-version-736081 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │                     │
	│ delete  │ -p old-k8s-version-736081                                                                                                                                                                                                                     │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ delete  │ -p old-k8s-version-736081                                                                                                                                                                                                                     │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ start   │ -p embed-certs-290628 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ addons  │ enable metrics-server -p embed-certs-290628 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │                     │
	│ stop    │ -p embed-certs-290628 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │ 10 Jan 26 02:45 UTC │
	│ addons  │ enable dashboard -p embed-certs-290628 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │ 10 Jan 26 02:45 UTC │
	│ start   │ -p embed-certs-290628 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │ 10 Jan 26 02:46 UTC │
	│ image   │ embed-certs-290628 image list --format=json                                                                                                                                                                                                   │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ pause   │ -p embed-certs-290628 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │                     │
	│ delete  │ -p embed-certs-290628                                                                                                                                                                                                                         │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ delete  │ -p embed-certs-290628                                                                                                                                                                                                                         │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ delete  │ -p disable-driver-mounts-990753                                                                                                                                                                                                               │ disable-driver-mounts-990753 │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ start   │ -p no-preload-676905 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:47 UTC │
	│ addons  │ enable metrics-server -p no-preload-676905 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:46:38
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:46:38.774377  212953 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:46:38.774685  212953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:46:38.774719  212953 out.go:374] Setting ErrFile to fd 2...
	I0110 02:46:38.774739  212953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:46:38.775013  212953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:46:38.775471  212953 out.go:368] Setting JSON to false
	I0110 02:46:38.776349  212953 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5348,"bootTime":1768007851,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 02:46:38.776443  212953 start.go:143] virtualization:  
	I0110 02:46:38.780390  212953 out.go:179] * [no-preload-676905] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:46:38.783772  212953 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:46:38.783867  212953 notify.go:221] Checking for updates...
	I0110 02:46:38.790211  212953 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:46:38.793297  212953 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:46:38.796399  212953 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 02:46:38.799429  212953 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:46:38.802404  212953 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:46:38.805843  212953 config.go:182] Loaded profile config "force-systemd-flag-038359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:46:38.805951  212953 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:46:38.837380  212953 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:46:38.837497  212953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:46:38.894148  212953 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:46:38.884875027 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:46:38.894253  212953 docker.go:319] overlay module found
	I0110 02:46:38.897596  212953 out.go:179] * Using the docker driver based on user configuration
	I0110 02:46:38.900574  212953 start.go:309] selected driver: docker
	I0110 02:46:38.900589  212953 start.go:928] validating driver "docker" against <nil>
	I0110 02:46:38.900604  212953 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:46:38.901307  212953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:46:38.956355  212953 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:46:38.946907345 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:46:38.956508  212953 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 02:46:38.956723  212953 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:46:38.959742  212953 out.go:179] * Using Docker driver with root privileges
	I0110 02:46:38.962858  212953 cni.go:84] Creating CNI manager for ""
	I0110 02:46:38.962924  212953 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:46:38.962942  212953 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 02:46:38.963021  212953 start.go:353] cluster config:
	{Name:no-preload-676905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-676905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:46:38.968043  212953 out.go:179] * Starting "no-preload-676905" primary control-plane node in "no-preload-676905" cluster
	I0110 02:46:38.971021  212953 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:46:38.973955  212953 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:46:38.976862  212953 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:46:38.977000  212953 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/config.json ...
	I0110 02:46:38.977049  212953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/config.json: {Name:mkcd229f1a704fcef1f97d8ac79b10ef0bd09109 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:46:38.977244  212953 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:46:38.977526  212953 cache.go:107] acquiring lock: {Name:mkdf2b70dc3bfb0100a8d957c112ff6d60b533f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:46:38.977589  212953 cache.go:115] /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0110 02:46:38.977598  212953 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 80.696µs
	I0110 02:46:38.977609  212953 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0110 02:46:38.977619  212953 cache.go:107] acquiring lock: {Name:mked65ab4ffae9cf085f87a9b484648d81831c67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:46:38.977648  212953 cache.go:115] /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I0110 02:46:38.977653  212953 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0" took 35.716µs
	I0110 02:46:38.977659  212953 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I0110 02:46:38.977668  212953 cache.go:107] acquiring lock: {Name:mkd95889d95a369bd71dc1a2761730b686349d74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:46:38.977699  212953 cache.go:115] /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I0110 02:46:38.977704  212953 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0" took 36.75µs
	I0110 02:46:38.977709  212953 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I0110 02:46:38.977718  212953 cache.go:107] acquiring lock: {Name:mk308c14dc1f570c027c3dfa4b755b4007e7f2d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:46:38.977745  212953 cache.go:115] /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I0110 02:46:38.977750  212953 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0" took 33.066µs
	I0110 02:46:38.977756  212953 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I0110 02:46:38.977764  212953 cache.go:107] acquiring lock: {Name:mk335c7d6e6cec745da4e01893ab73b038bcc37b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:46:38.977788  212953 cache.go:115] /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I0110 02:46:38.977798  212953 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0" took 31.982µs
	I0110 02:46:38.977804  212953 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I0110 02:46:38.977812  212953 cache.go:107] acquiring lock: {Name:mk712a03fba9f53486bb85d78a3ef35c15cedfe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:46:38.977849  212953 cache.go:115] /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I0110 02:46:38.977855  212953 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 43.757µs
	I0110 02:46:38.977860  212953 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I0110 02:46:38.977868  212953 cache.go:107] acquiring lock: {Name:mk8489c7600ecf98e77b2d0fd473a4d98a759726 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:46:38.977892  212953 cache.go:115] /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I0110 02:46:38.977896  212953 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 28.996µs
	I0110 02:46:38.977901  212953 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I0110 02:46:38.977909  212953 cache.go:107] acquiring lock: {Name:mk321022d40fb1eff3edb501792389e1ccf9fc85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:46:38.977936  212953 cache.go:115] /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I0110 02:46:38.977941  212953 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 33.361µs
	I0110 02:46:38.977947  212953 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I0110 02:46:38.977952  212953 cache.go:87] Successfully saved all images to host disk.
	I0110 02:46:38.996348  212953 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:46:38.996370  212953 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:46:38.996389  212953 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:46:38.996417  212953 start.go:360] acquireMachinesLock for no-preload-676905: {Name:mk2632012d0afb769f32ccada6003bc8dbc8f0e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:46:38.996531  212953 start.go:364] duration metric: took 94.324µs to acquireMachinesLock for "no-preload-676905"
	I0110 02:46:38.996558  212953 start.go:93] Provisioning new machine with config: &{Name:no-preload-676905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-676905 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:46:38.996636  212953 start.go:125] createHost starting for "" (driver="docker")
	I0110 02:46:39.001924  212953 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:46:39.002177  212953 start.go:159] libmachine.API.Create for "no-preload-676905" (driver="docker")
	I0110 02:46:39.002215  212953 client.go:173] LocalClient.Create starting
	I0110 02:46:39.002307  212953 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem
	I0110 02:46:39.002347  212953 main.go:144] libmachine: Decoding PEM data...
	I0110 02:46:39.002367  212953 main.go:144] libmachine: Parsing certificate...
	I0110 02:46:39.002417  212953 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem
	I0110 02:46:39.002438  212953 main.go:144] libmachine: Decoding PEM data...
	I0110 02:46:39.002459  212953 main.go:144] libmachine: Parsing certificate...
	I0110 02:46:39.002845  212953 cli_runner.go:164] Run: docker network inspect no-preload-676905 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:46:39.020214  212953 cli_runner.go:211] docker network inspect no-preload-676905 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:46:39.020317  212953 network_create.go:284] running [docker network inspect no-preload-676905] to gather additional debugging logs...
	I0110 02:46:39.020339  212953 cli_runner.go:164] Run: docker network inspect no-preload-676905
	W0110 02:46:39.037396  212953 cli_runner.go:211] docker network inspect no-preload-676905 returned with exit code 1
	I0110 02:46:39.037445  212953 network_create.go:287] error running [docker network inspect no-preload-676905]: docker network inspect no-preload-676905: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-676905 not found
	I0110 02:46:39.037474  212953 network_create.go:289] output of [docker network inspect no-preload-676905]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-676905 not found
	
	** /stderr **
	I0110 02:46:39.037596  212953 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:46:39.054724  212953 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dba69832168e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:28:cf:b8:5c:6c} reservation:<nil>}
	I0110 02:46:39.055054  212953 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1bcca9209747 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d2:8b:83:a4:43:ae} reservation:<nil>}
	I0110 02:46:39.055410  212953 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-383ddfacc8f8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:26:ff:fb:97:66:af} reservation:<nil>}
	I0110 02:46:39.055874  212953 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a846c0}
	I0110 02:46:39.055904  212953 network_create.go:124] attempt to create docker network no-preload-676905 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0110 02:46:39.055965  212953 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-676905 no-preload-676905
	I0110 02:46:39.119871  212953 network_create.go:108] docker network no-preload-676905 192.168.76.0/24 created
	I0110 02:46:39.119904  212953 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-676905" container
	I0110 02:46:39.119987  212953 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:46:39.136638  212953 cli_runner.go:164] Run: docker volume create no-preload-676905 --label name.minikube.sigs.k8s.io=no-preload-676905 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:46:39.154512  212953 oci.go:103] Successfully created a docker volume no-preload-676905
	I0110 02:46:39.154613  212953 cli_runner.go:164] Run: docker run --rm --name no-preload-676905-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-676905 --entrypoint /usr/bin/test -v no-preload-676905:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 02:46:39.685015  212953 oci.go:107] Successfully prepared a docker volume no-preload-676905
	I0110 02:46:39.685085  212953 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	W0110 02:46:39.685222  212953 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 02:46:39.685331  212953 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 02:46:39.744876  212953 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-676905 --name no-preload-676905 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-676905 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-676905 --network no-preload-676905 --ip 192.168.76.2 --volume no-preload-676905:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 02:46:40.083158  212953 cli_runner.go:164] Run: docker container inspect no-preload-676905 --format={{.State.Running}}
	I0110 02:46:40.102825  212953 cli_runner.go:164] Run: docker container inspect no-preload-676905 --format={{.State.Status}}
	I0110 02:46:40.128268  212953 cli_runner.go:164] Run: docker exec no-preload-676905 stat /var/lib/dpkg/alternatives/iptables
	I0110 02:46:40.188317  212953 oci.go:144] the created container "no-preload-676905" has a running status.
	I0110 02:46:40.188357  212953 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa...
	I0110 02:46:40.276622  212953 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 02:46:40.299937  212953 cli_runner.go:164] Run: docker container inspect no-preload-676905 --format={{.State.Status}}
	I0110 02:46:40.321615  212953 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 02:46:40.321634  212953 kic_runner.go:114] Args: [docker exec --privileged no-preload-676905 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 02:46:40.371546  212953 cli_runner.go:164] Run: docker container inspect no-preload-676905 --format={{.State.Status}}
	I0110 02:46:40.392552  212953 machine.go:94] provisionDockerMachine start ...
	I0110 02:46:40.392662  212953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:46:40.415265  212953 main.go:144] libmachine: Using SSH client type: native
	I0110 02:46:40.415961  212953 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I0110 02:46:40.415977  212953 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:46:40.416575  212953 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37724->127.0.0.1:33068: read: connection reset by peer
	I0110 02:46:43.567477  212953 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-676905
	
	I0110 02:46:43.567499  212953 ubuntu.go:182] provisioning hostname "no-preload-676905"
	I0110 02:46:43.567573  212953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:46:43.584788  212953 main.go:144] libmachine: Using SSH client type: native
	I0110 02:46:43.585095  212953 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I0110 02:46:43.585110  212953 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-676905 && echo "no-preload-676905" | sudo tee /etc/hostname
	I0110 02:46:43.744164  212953 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-676905
	
	I0110 02:46:43.744240  212953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:46:43.762582  212953 main.go:144] libmachine: Using SSH client type: native
	I0110 02:46:43.762892  212953 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I0110 02:46:43.762912  212953 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-676905' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-676905/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-676905' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:46:43.908489  212953 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:46:43.908518  212953 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2353/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2353/.minikube}
	I0110 02:46:43.908605  212953 ubuntu.go:190] setting up certificates
	I0110 02:46:43.908623  212953 provision.go:84] configureAuth start
	I0110 02:46:43.908708  212953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-676905
	I0110 02:46:43.937462  212953 provision.go:143] copyHostCerts
	I0110 02:46:43.937531  212953 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem, removing ...
	I0110 02:46:43.937540  212953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem
	I0110 02:46:43.937620  212953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem (1082 bytes)
	I0110 02:46:43.937713  212953 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem, removing ...
	I0110 02:46:43.937719  212953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem
	I0110 02:46:43.937744  212953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem (1123 bytes)
	I0110 02:46:43.937798  212953 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem, removing ...
	I0110 02:46:43.937802  212953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem
	I0110 02:46:43.937824  212953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem (1679 bytes)
	I0110 02:46:43.937871  212953 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem org=jenkins.no-preload-676905 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-676905]
	I0110 02:46:44.091414  212953 provision.go:177] copyRemoteCerts
	I0110 02:46:44.091513  212953 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:46:44.091573  212953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:46:44.108173  212953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa Username:docker}
	I0110 02:46:44.211155  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:46:44.227484  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 02:46:44.243976  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 02:46:44.260412  212953 provision.go:87] duration metric: took 351.74894ms to configureAuth
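
	[editor's note] The server certificate written to /etc/docker/server.pem above was generated with the SANs listed in the log (127.0.0.1, 192.168.76.2, localhost, minikube, no-preload-676905). As a hypothetical manual check outside the test run, the SANs can be confirmed on the node with:

	    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
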
	I0110 02:46:44.260437  212953 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:46:44.260647  212953 config.go:182] Loaded profile config "no-preload-676905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:46:44.260754  212953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:46:44.277819  212953 main.go:144] libmachine: Using SSH client type: native
	I0110 02:46:44.278147  212953 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I0110 02:46:44.278167  212953 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:46:44.566804  212953 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:46:44.566843  212953 machine.go:97] duration metric: took 4.174266874s to provisionDockerMachine
	I0110 02:46:44.566854  212953 client.go:176] duration metric: took 5.564627034s to LocalClient.Create
	I0110 02:46:44.566868  212953 start.go:167] duration metric: took 5.564691984s to libmachine.API.Create "no-preload-676905"
	I0110 02:46:44.566875  212953 start.go:293] postStartSetup for "no-preload-676905" (driver="docker")
	I0110 02:46:44.566885  212953 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:46:44.566952  212953 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:46:44.567000  212953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:46:44.584111  212953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa Username:docker}
	I0110 02:46:44.688764  212953 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:46:44.692695  212953 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:46:44.692721  212953 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:46:44.692732  212953 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/addons for local assets ...
	I0110 02:46:44.692782  212953 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/files for local assets ...
	I0110 02:46:44.692858  212953 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> 41682.pem in /etc/ssl/certs
	I0110 02:46:44.692961  212953 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:46:44.701153  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:46:44.721259  212953 start.go:296] duration metric: took 154.369815ms for postStartSetup
	I0110 02:46:44.721622  212953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-676905
	I0110 02:46:44.743917  212953 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/config.json ...
	I0110 02:46:44.744200  212953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:46:44.744262  212953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:46:44.761611  212953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa Username:docker}
	I0110 02:46:44.860913  212953 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:46:44.865455  212953 start.go:128] duration metric: took 5.868805451s to createHost
	I0110 02:46:44.865484  212953 start.go:83] releasing machines lock for "no-preload-676905", held for 5.868941275s
	I0110 02:46:44.865552  212953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-676905
	I0110 02:46:44.882276  212953 ssh_runner.go:195] Run: cat /version.json
	I0110 02:46:44.882329  212953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:46:44.882595  212953 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:46:44.882657  212953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:46:44.905713  212953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa Username:docker}
	I0110 02:46:44.910235  212953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa Username:docker}
	I0110 02:46:45.003279  212953 ssh_runner.go:195] Run: systemctl --version
	I0110 02:46:45.129892  212953 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:46:45.188713  212953 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:46:45.210548  212953 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:46:45.210652  212953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:46:45.265580  212953 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 02:46:45.265661  212953 start.go:496] detecting cgroup driver to use...
	I0110 02:46:45.265760  212953 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 02:46:45.265834  212953 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:46:45.288781  212953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:46:45.304030  212953 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:46:45.304127  212953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:46:45.323054  212953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:46:45.346618  212953 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:46:45.476556  212953 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:46:45.617032  212953 docker.go:234] disabling docker service ...
	I0110 02:46:45.617100  212953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:46:45.638387  212953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:46:45.651127  212953 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:46:45.768551  212953 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:46:45.884690  212953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:46:45.897474  212953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:46:45.910773  212953 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:46:45.910876  212953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:46:45.919484  212953 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 02:46:45.919587  212953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:46:45.928147  212953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:46:45.936356  212953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:46:45.944559  212953 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:46:45.952461  212953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:46:45.960785  212953 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:46:45.974138  212953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:46:45.982885  212953 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:46:45.990329  212953 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:46:45.997534  212953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:46:46.110151  212953 ssh_runner.go:195] Run: sudo systemctl restart crio
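
	[editor's note] Taken together, the sed edits above leave the 02-crio.conf drop-in with a pause image, cgroup driver, conmon cgroup, and unprivileged-port sysctl matching this run. A rough reconstruction of the resulting settings, assembled from the commands in this log rather than captured from the actual file (the [crio.image]/[crio.runtime] section placement is an assumption):

	    # /etc/crio/crio.conf.d/02-crio.conf (reconstructed sketch)
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"

	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
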
	I0110 02:46:46.282709  212953 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:46:46.282828  212953 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:46:46.286578  212953 start.go:574] Will wait 60s for crictl version
	I0110 02:46:46.286653  212953 ssh_runner.go:195] Run: which crictl
	I0110 02:46:46.290175  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:46:46.318039  212953 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:46:46.318153  212953 ssh_runner.go:195] Run: crio --version
	I0110 02:46:46.344982  212953 ssh_runner.go:195] Run: crio --version
	I0110 02:46:46.379093  212953 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:46:46.381947  212953 cli_runner.go:164] Run: docker network inspect no-preload-676905 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:46:46.397783  212953 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 02:46:46.401447  212953 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:46:46.410469  212953 kubeadm.go:884] updating cluster {Name:no-preload-676905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-676905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:46:46.410574  212953 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:46:46.410613  212953 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:46:46.434172  212953 crio.go:557] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0". assuming images are not preloaded.
	I0110 02:46:46.434195  212953 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0 registry.k8s.io/kube-controller-manager:v1.35.0 registry.k8s.io/kube-scheduler:v1.35.0 registry.k8s.io/kube-proxy:v1.35.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0110 02:46:46.434251  212953 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:46:46.434275  212953 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0
	I0110 02:46:46.434439  212953 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0
	I0110 02:46:46.434459  212953 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I0110 02:46:46.434530  212953 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0
	I0110 02:46:46.434541  212953 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I0110 02:46:46.434613  212953 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0
	I0110 02:46:46.434440  212953 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I0110 02:46:46.435902  212953 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I0110 02:46:46.436300  212953 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:46:46.436505  212953 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0
	I0110 02:46:46.436612  212953 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I0110 02:46:46.436647  212953 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0
	I0110 02:46:46.436792  212953 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0
	I0110 02:46:46.436812  212953 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0
	I0110 02:46:46.436910  212953 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I0110 02:46:46.873693  212953 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I0110 02:46:46.882629  212953 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I0110 02:46:46.883135  212953 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0
	I0110 02:46:46.900105  212953 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0
	I0110 02:46:46.907497  212953 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0
	I0110 02:46:46.933363  212953 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I0110 02:46:46.933417  212953 cri.go:226] Removing image: registry.k8s.io/pause:3.10.1
	I0110 02:46:46.933484  212953 ssh_runner.go:195] Run: which crictl
	I0110 02:46:46.963509  212953 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0
	I0110 02:46:46.984517  212953 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I0110 02:46:46.992801  212953 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0" does not exist at hash "de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5" in container runtime
	I0110 02:46:46.992882  212953 cri.go:226] Removing image: registry.k8s.io/kube-proxy:v1.35.0
	I0110 02:46:46.992973  212953 ssh_runner.go:195] Run: which crictl
	I0110 02:46:46.993033  212953 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf" in container runtime
	I0110 02:46:46.993151  212953 cri.go:226] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I0110 02:46:46.993192  212953 ssh_runner.go:195] Run: which crictl
	I0110 02:46:47.049972  212953 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0" does not exist at hash "88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0" in container runtime
	I0110 02:46:47.050090  212953 cri.go:226] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0
	I0110 02:46:47.050166  212953 ssh_runner.go:195] Run: which crictl
	I0110 02:46:47.075724  212953 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0" does not exist at hash "c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856" in container runtime
	I0110 02:46:47.075768  212953 cri.go:226] Removing image: registry.k8s.io/kube-apiserver:v1.35.0
	I0110 02:46:47.075899  212953 ssh_runner.go:195] Run: which crictl
	I0110 02:46:47.075975  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I0110 02:46:47.076029  212953 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0" does not exist at hash "ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f" in container runtime
	I0110 02:46:47.076051  212953 cri.go:226] Removing image: registry.k8s.io/kube-scheduler:v1.35.0
	I0110 02:46:47.076076  212953 ssh_runner.go:195] Run: which crictl
	I0110 02:46:47.076122  212953 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57" in container runtime
	I0110 02:46:47.076142  212953 cri.go:226] Removing image: registry.k8s.io/etcd:3.6.6-0
	I0110 02:46:47.076163  212953 ssh_runner.go:195] Run: which crictl
	I0110 02:46:47.076217  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I0110 02:46:47.076289  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I0110 02:46:47.076315  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I0110 02:46:47.138213  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I0110 02:46:47.138299  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I0110 02:46:47.138380  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I0110 02:46:47.142419  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I0110 02:46:47.142484  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I0110 02:46:47.142557  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I0110 02:46:47.142603  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I0110 02:46:47.228655  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0
	I0110 02:46:47.228753  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I0110 02:46:47.228834  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I0110 02:46:47.255530  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I0110 02:46:47.255632  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I0110 02:46:47.255693  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I0110 02:46:47.255722  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0
	I0110 02:46:47.331827  212953 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I0110 02:46:47.331962  212953 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0
	I0110 02:46:47.332005  212953 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I0110 02:46:47.332053  212953 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I0110 02:46:47.332125  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0
	I0110 02:46:47.361740  212953 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0
	I0110 02:46:47.361847  212953 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0
	I0110 02:46:47.361917  212953 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1
	I0110 02:46:47.361979  212953 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I0110 02:46:47.362064  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0
	I0110 02:46:47.362136  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I0110 02:46:47.385537  212953 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0
	I0110 02:46:47.385636  212953 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0
	I0110 02:46:47.385696  212953 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I0110 02:46:47.385714  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I0110 02:46:47.385758  212953 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0': No such file or directory
	I0110 02:46:47.385772  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0 (20682752 bytes)
	I0110 02:46:47.440336  212953 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0
	I0110 02:46:47.440413  212953 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0
	I0110 02:46:47.440527  212953 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0
	I0110 02:46:47.440527  212953 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I0110 02:46:47.440387  212953 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I0110 02:46:47.440615  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (21178368 bytes)
	I0110 02:46:47.440362  212953 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0': No such file or directory
	I0110 02:46:47.440658  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0 (22434816 bytes)
	I0110 02:46:47.440692  212953 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0': No such file or directory
	I0110 02:46:47.440733  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0 (24702976 bytes)
	I0110 02:46:47.492248  212953 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I0110 02:46:47.492317  212953 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I0110 02:46:47.507056  212953 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0': No such file or directory
	I0110 02:46:47.507098  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0 (15415808 bytes)
	I0110 02:46:47.507140  212953 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I0110 02:46:47.507155  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (21761024 bytes)
	W0110 02:46:47.693408  212953 image.go:328] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0110 02:46:47.693581  212953 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:46:47.977109  212953 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0110 02:46:47.977197  212953 cri.go:226] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:46:47.977288  212953 ssh_runner.go:195] Run: which crictl
	I0110 02:46:47.977346  212953 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I0110 02:46:48.061463  212953 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0
	I0110 02:46:48.061849  212953 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0
	I0110 02:46:48.086023  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:46:49.619033  212953 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0: (1.557134737s)
	I0110 02:46:49.619059  212953 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 from cache
	I0110 02:46:49.619068  212953 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.532958873s)
	I0110 02:46:49.619143  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:46:49.619076  212953 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I0110 02:46:49.619235  212953 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0
	I0110 02:46:49.647730  212953 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:46:50.738205  212953 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0: (1.118949466s)
	I0110 02:46:50.738239  212953 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 from cache
	I0110 02:46:50.738256  212953 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I0110 02:46:50.738303  212953 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I0110 02:46:50.738373  212953 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.090619243s)
	I0110 02:46:50.738401  212953 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0110 02:46:50.738471  212953 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0110 02:46:51.924921  212953 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.186425123s)
	I0110 02:46:51.924951  212953 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0110 02:46:51.924978  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0110 02:46:51.925112  212953 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.186792054s)
	I0110 02:46:51.925123  212953 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I0110 02:46:51.925140  212953 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0
	I0110 02:46:51.925178  212953 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0
	I0110 02:46:53.268509  212953 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0: (1.343312073s)
	I0110 02:46:53.268533  212953 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 from cache
	I0110 02:46:53.268550  212953 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I0110 02:46:53.268596  212953 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0
	I0110 02:46:55.011550  212953 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0: (1.742929516s)
	I0110 02:46:55.011587  212953 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 from cache
	I0110 02:46:55.011614  212953 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0
	I0110 02:46:55.011681  212953 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0
	I0110 02:46:56.371011  212953 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0: (1.359303197s)
	I0110 02:46:56.371037  212953 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 from cache
	I0110 02:46:56.371054  212953 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0110 02:46:56.371103  212953 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0110 02:46:56.918137  212953 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0110 02:46:56.918180  212953 cache_images.go:125] Successfully loaded all cached images
	I0110 02:46:56.918187  212953 cache_images.go:94] duration metric: took 10.483979376s to LoadCachedImages
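
	[editor's note] Each of the loads above follows the same per-image pattern: check CRI-O's storage with podman, clear any stale or mismatched tag, then load the cached tarball that was scp'd to /var/lib/minikube/images. A condensed sketch of that sequence for one image (paths and image name taken from this log; not an exact transcript of the minikube code):

	    img=registry.k8s.io/pause:3.10.1
	    tar=/var/lib/minikube/images/pause_3.10.1
	    if ! sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
	      sudo /usr/local/bin/crictl rmi "$img" 2>/dev/null || true  # drop any mismatched copy
	      sudo podman load -i "$tar"                                 # tarball copied from the host cache
	    fi
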
	I0110 02:46:56.918199  212953 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 02:46:56.918287  212953 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-676905 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-676905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
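
	[editor's note] Once the kubelet unit and drop-in above are installed (see the scp of kubelet.service and 10-kubeadm.conf a few lines below), the effective kubelet command line can be inspected with systemd's own tooling; a hypothetical manual check:

	    sudo systemctl cat kubelet
	    sudo systemctl show kubelet -p ExecStart --no-pager
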
	I0110 02:46:56.918365  212953 ssh_runner.go:195] Run: crio config
	I0110 02:46:56.986718  212953 cni.go:84] Creating CNI manager for ""
	I0110 02:46:56.986784  212953 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:46:56.986818  212953 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:46:56.986871  212953 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-676905 NodeName:no-preload-676905 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:46:56.987033  212953 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-676905"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:46:56.987124  212953 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:46:56.994738  212953 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0': No such file or directory
	
	Initiating transfer...
	I0110 02:46:56.994825  212953 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0
	I0110 02:46:57.002178  212953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
	I0110 02:46:57.002276  212953 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl
	I0110 02:46:57.002362  212953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubelet.sha256
	I0110 02:46:57.002396  212953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:46:57.002485  212953 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubeadm.sha256
	I0110 02:46:57.002539  212953 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm
	I0110 02:46:57.011524  212953 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubectl': No such file or directory
	I0110 02:46:57.011613  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/cache/linux/arm64/v1.35.0/kubectl --> /var/lib/minikube/binaries/v1.35.0/kubectl (55247032 bytes)
	I0110 02:46:57.024874  212953 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubeadm': No such file or directory
	I0110 02:46:57.024957  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/cache/linux/arm64/v1.35.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0/kubeadm (68354232 bytes)
	I0110 02:46:57.024902  212953 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet
	I0110 02:46:57.046805  212953 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0/kubelet': No such file or directory
	I0110 02:46:57.046847  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/cache/linux/arm64/v1.35.0/kubelet --> /var/lib/minikube/binaries/v1.35.0/kubelet (54329636 bytes)
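
	[editor's note] The "Not caching binary" lines above point at dl.k8s.io URLs with per-file .sha256 checksums; in this run the binaries are then copied from the local cache, but the same download-and-verify pattern can be reproduced standalone (assumes curl and sha256sum on the host; URLs copied from the log):

	    ver=v1.35.0; arch=arm64
	    for bin in kubectl kubeadm kubelet; do
	      curl -fsSLo "$bin" "https://dl.k8s.io/release/$ver/bin/linux/$arch/$bin"
	      echo "$(curl -fsSL "https://dl.k8s.io/release/$ver/bin/linux/$arch/$bin.sha256")  $bin" | sha256sum -c -
	    done
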
	I0110 02:46:57.838727  212953 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:46:57.847518  212953 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0110 02:46:57.861948  212953 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:46:57.875360  212953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
	I0110 02:46:57.888593  212953 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:46:57.892489  212953 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:46:57.902225  212953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:46:58.007987  212953 ssh_runner.go:195] Run: sudo systemctl start kubelet
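
	[editor's note] At this point the generated kubeadm config has been written to /var/tmp/minikube/kubeadm.yaml.new and the kubelet has been started. As a hypothetical manual step, not part of the test run, that config can be exercised without touching node state using kubeadm's dry-run mode:

	    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
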
	I0110 02:46:58.025013  212953 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905 for IP: 192.168.76.2
	I0110 02:46:58.025077  212953 certs.go:195] generating shared ca certs ...
	I0110 02:46:58.025116  212953 certs.go:227] acquiring lock for ca certs: {Name:mk60bb8578732e3e8efcf11c7d77fb48585828c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:46:58.025277  212953 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key
	I0110 02:46:58.025355  212953 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key
	I0110 02:46:58.025378  212953 certs.go:257] generating profile certs ...
	I0110 02:46:58.025487  212953 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/client.key
	I0110 02:46:58.025525  212953 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/client.crt with IP's: []
	I0110 02:46:58.208559  212953 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/client.crt ...
	I0110 02:46:58.208628  212953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/client.crt: {Name:mkd28cf667e6c5b7f4b35f86d7035f5ebb7ee089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:46:58.208851  212953 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/client.key ...
	I0110 02:46:58.208887  212953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/client.key: {Name:mk51b9247b2c761f340d8748ab788713dee6e9fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:46:58.209020  212953 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/apiserver.key.9031fc60
	I0110 02:46:58.209059  212953 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/apiserver.crt.9031fc60 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0110 02:46:58.297077  212953 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/apiserver.crt.9031fc60 ...
	I0110 02:46:58.297107  212953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/apiserver.crt.9031fc60: {Name:mk1f3962ea00945624eac947644f72a25431434c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:46:58.297286  212953 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/apiserver.key.9031fc60 ...
	I0110 02:46:58.297301  212953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/apiserver.key.9031fc60: {Name:mkeaf8d984520b818b015feeae3c1314af18214a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:46:58.297385  212953 certs.go:382] copying /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/apiserver.crt.9031fc60 -> /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/apiserver.crt
	I0110 02:46:58.297465  212953 certs.go:386] copying /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/apiserver.key.9031fc60 -> /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/apiserver.key
	I0110 02:46:58.297524  212953 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/proxy-client.key
	I0110 02:46:58.297542  212953 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/proxy-client.crt with IP's: []
	I0110 02:46:58.379387  212953 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/proxy-client.crt ...
	I0110 02:46:58.379416  212953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/proxy-client.crt: {Name:mk3f6ed76db3181f3136e95bd25173c6d440a569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:46:58.379599  212953 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/proxy-client.key ...
	I0110 02:46:58.379614  212953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/proxy-client.key: {Name:mkb6535e79f44d51106d96518405ace9910941b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:46:58.379819  212953 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem (1338 bytes)
	W0110 02:46:58.379861  212953 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168_empty.pem, impossibly tiny 0 bytes
	I0110 02:46:58.379883  212953 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:46:58.379911  212953 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:46:58.379939  212953 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:46:58.379968  212953 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem (1679 bytes)
	I0110 02:46:58.380018  212953 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:46:58.380573  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:46:58.399610  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:46:58.418817  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:46:58.437084  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 02:46:58.454939  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0110 02:46:58.472554  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 02:46:58.490260  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:46:58.508770  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 02:46:58.526143  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I0110 02:46:58.543368  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I0110 02:46:58.561810  212953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:46:58.578589  212953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:46:58.591087  212953 ssh_runner.go:195] Run: openssl version
	I0110 02:46:58.597514  212953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I0110 02:46:58.604976  212953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I0110 02:46:58.612487  212953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I0110 02:46:58.616526  212953 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:58 /usr/share/ca-certificates/41682.pem
	I0110 02:46:58.616592  212953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I0110 02:46:58.657470  212953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:46:58.665218  212953 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41682.pem /etc/ssl/certs/3ec20f2e.0
	I0110 02:46:58.672888  212953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:46:58.680268  212953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:46:58.687910  212953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:46:58.691669  212953 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:46:58.691771  212953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:46:58.732454  212953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:46:58.740097  212953 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 02:46:58.747297  212953 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I0110 02:46:58.754756  212953 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I0110 02:46:58.762095  212953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I0110 02:46:58.765886  212953 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:58 /usr/share/ca-certificates/4168.pem
	I0110 02:46:58.765953  212953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I0110 02:46:58.806988  212953 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:46:58.814554  212953 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4168.pem /etc/ssl/certs/51391683.0
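The ls / openssl / ln sequence above is how minikube publishes each CA into the guest's OpenSSL trust store: "openssl x509 -hash -noout" prints the certificate's subject-name hash, and a symlink named <hash>.0 under /etc/ssl/certs points at the PEM so OpenSSL can find it by hash lookup (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the two test certificates in this run). A minimal Go sketch of that convention; the helper name and hard-coded path are illustrative, not minikube's actual code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA links one CA certificate into /etc/ssl/certs under its OpenSSL
// subject-hash name, mirroring the openssl/ln steps in the log above.
func installCA(pemPath string) error {
	// "openssl x509 -hash -noout -in <pem>" prints the subject-name hash,
	// e.g. b5213941 for minikubeCA.pem in this run.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))

	// OpenSSL resolves trust lookups through symlinks named <hash>.0.
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // behave like ln -fs: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}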
	I0110 02:46:58.822092  212953 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:46:58.826101  212953 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 02:46:58.826176  212953 kubeadm.go:401] StartCluster: {Name:no-preload-676905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-676905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:46:58.826261  212953 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:46:58.826319  212953 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:46:58.852441  212953 cri.go:96] found id: ""
	I0110 02:46:58.852579  212953 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:46:58.860570  212953 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 02:46:58.868355  212953 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:46:58.868425  212953 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:46:58.876532  212953 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:46:58.876553  212953 kubeadm.go:158] found existing configuration files:
	
	I0110 02:46:58.876602  212953 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:46:58.884383  212953 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:46:58.884449  212953 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:46:58.891733  212953 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:46:58.899557  212953 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:46:58.899674  212953 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:46:58.907506  212953 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:46:58.925544  212953 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:46:58.925658  212953 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:46:58.935354  212953 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:46:58.944949  212953 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:46:58.945040  212953 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:46:58.960977  212953 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:46:59.028663  212953 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:46:59.029036  212953 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:46:59.098699  212953 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:46:59.098814  212953 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:46:59.098871  212953 kubeadm.go:319] OS: Linux
	I0110 02:46:59.098965  212953 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:46:59.099045  212953 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:46:59.099097  212953 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:46:59.099179  212953 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:46:59.099232  212953 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:46:59.099297  212953 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:46:59.099353  212953 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:46:59.099412  212953 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:46:59.099463  212953 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:46:59.163910  212953 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:46:59.164088  212953 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:46:59.164226  212953 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:46:59.184177  212953 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 02:46:59.191476  212953 out.go:252]   - Generating certificates and keys ...
	I0110 02:46:59.191577  212953 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:46:59.191643  212953 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:46:59.435931  212953 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 02:46:59.742788  212953 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 02:46:59.795326  212953 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 02:47:00.174301  212953 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 02:47:00.345252  212953 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 02:47:00.345481  212953 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-676905] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 02:47:00.486299  212953 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 02:47:00.486492  212953 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-676905] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 02:47:00.865457  212953 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 02:47:00.916068  212953 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 02:47:01.057964  212953 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 02:47:01.058280  212953 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:47:02.054147  212953 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:47:02.720233  212953 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:47:03.073545  212953 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:47:03.588737  212953 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:47:03.888221  212953 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:47:03.889020  212953 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:47:03.892402  212953 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:47:03.915934  212953 out.go:252]   - Booting up control plane ...
	I0110 02:47:03.916048  212953 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:47:03.916148  212953 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:47:03.916228  212953 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:47:03.916336  212953 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:47:03.916435  212953 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:47:03.919278  212953 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:47:03.919769  212953 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:47:03.920062  212953 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:47:04.050022  212953 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:47:04.050209  212953 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:47:05.051645  212953 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001758062s
	I0110 02:47:05.055434  212953 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0110 02:47:05.055545  212953 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0110 02:47:05.055648  212953 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0110 02:47:05.055744  212953 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0110 02:47:06.066039  212953 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.010123017s
	I0110 02:47:07.569956  212953 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.51454393s
	I0110 02:47:09.556852  212953 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501287737s
	I0110 02:47:09.595402  212953 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0110 02:47:09.611531  212953 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0110 02:47:09.628327  212953 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0110 02:47:09.628597  212953 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-676905 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0110 02:47:09.641534  212953 kubeadm.go:319] [bootstrap-token] Using token: 2kocpi.k4hg3it1aauvyu37
	I0110 02:47:09.644513  212953 out.go:252]   - Configuring RBAC rules ...
	I0110 02:47:09.644646  212953 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0110 02:47:09.648798  212953 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0110 02:47:09.662722  212953 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0110 02:47:09.667208  212953 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0110 02:47:09.674509  212953 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0110 02:47:09.685993  212953 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0110 02:47:09.963545  212953 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0110 02:47:10.402282  212953 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0110 02:47:10.965772  212953 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0110 02:47:10.967276  212953 kubeadm.go:319] 
	I0110 02:47:10.967350  212953 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0110 02:47:10.967355  212953 kubeadm.go:319] 
	I0110 02:47:10.967433  212953 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0110 02:47:10.967437  212953 kubeadm.go:319] 
	I0110 02:47:10.967462  212953 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0110 02:47:10.967521  212953 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0110 02:47:10.967571  212953 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0110 02:47:10.967575  212953 kubeadm.go:319] 
	I0110 02:47:10.967628  212953 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0110 02:47:10.967632  212953 kubeadm.go:319] 
	I0110 02:47:10.967680  212953 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0110 02:47:10.967683  212953 kubeadm.go:319] 
	I0110 02:47:10.967736  212953 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0110 02:47:10.967835  212953 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0110 02:47:10.967905  212953 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0110 02:47:10.967909  212953 kubeadm.go:319] 
	I0110 02:47:10.967993  212953 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0110 02:47:10.968077  212953 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0110 02:47:10.968081  212953 kubeadm.go:319] 
	I0110 02:47:10.968165  212953 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2kocpi.k4hg3it1aauvyu37 \
	I0110 02:47:10.968267  212953 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e531c0ad449fbe8263f118458e677222d5af5abf33ad8c77cbc1152855f23f5 \
	I0110 02:47:10.968288  212953 kubeadm.go:319] 	--control-plane 
	I0110 02:47:10.968291  212953 kubeadm.go:319] 
	I0110 02:47:10.968376  212953 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0110 02:47:10.968380  212953 kubeadm.go:319] 
	I0110 02:47:10.968462  212953 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2kocpi.k4hg3it1aauvyu37 \
	I0110 02:47:10.968564  212953 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e531c0ad449fbe8263f118458e677222d5af5abf33ad8c77cbc1152855f23f5 
	I0110 02:47:10.970994  212953 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:47:10.971402  212953 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:47:10.971509  212953 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
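The --discovery-token-ca-cert-hash printed in the kubeadm join commands above is kubeadm's public-key pin: the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, which joining nodes use to verify they are bootstrapping against the intended control plane. A small, illustrative Go program (not part of the test harness) that recomputes that value from the CA certificate path used in this run:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path taken from the log above; adjust for other clusters.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm pins the SHA-256 of the CA's DER-encoded SubjectPublicKeyInfo.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}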
	I0110 02:47:10.971524  212953 cni.go:84] Creating CNI manager for ""
	I0110 02:47:10.971531  212953 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:47:10.976634  212953 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0110 02:47:10.979529  212953 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0110 02:47:10.984530  212953 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0110 02:47:10.984559  212953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0110 02:47:11.001853  212953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0110 02:47:11.303580  212953 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0110 02:47:11.303707  212953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:47:11.303823  212953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-676905 minikube.k8s.io/updated_at=2026_01_10T02_47_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510 minikube.k8s.io/name=no-preload-676905 minikube.k8s.io/primary=true
	I0110 02:47:11.475885  212953 ops.go:34] apiserver oom_adj: -16
	I0110 02:47:11.475993  212953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:47:11.976613  212953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:47:12.476643  212953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:47:12.976830  212953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:47:13.476053  212953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:47:13.976099  212953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:47:14.476905  212953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:47:14.976397  212953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:47:15.477020  212953 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:47:15.597528  212953 kubeadm.go:1114] duration metric: took 4.293865559s to wait for elevateKubeSystemPrivileges
	I0110 02:47:15.597559  212953 kubeadm.go:403] duration metric: took 16.771387233s to StartCluster
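The burst of repeated "kubectl get sa default" calls at roughly 500ms intervals above is minikube waiting for the default service account to exist before the minikube-rbac cluster-admin binding can take effect (the elevateKubeSystemPrivileges step whose duration is reported here). A rough Go sketch of that kind of poll-until-ready loop; the function name, timeout, and sudo invocation are illustrative rather than minikube's exact implementation:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA re-runs "kubectl get sa default" until it succeeds or the
// deadline passes, mirroring the retry loop visible in the log above.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the service account exists; RBAC bindings can be applied
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not created within %s", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.35.0/kubectl",
		"/var/lib/minikube/kubeconfig", time.Minute)
	fmt.Println(err)
}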
	I0110 02:47:15.597583  212953 settings.go:142] acquiring lock: {Name:mkf19ab3097ad600bdb33c5b47b20062ddaabac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:47:15.597642  212953 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:47:15.598265  212953 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:47:15.598485  212953 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:47:15.598605  212953 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0110 02:47:15.598847  212953 config.go:182] Loaded profile config "no-preload-676905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:47:15.598889  212953 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:47:15.598954  212953 addons.go:70] Setting storage-provisioner=true in profile "no-preload-676905"
	I0110 02:47:15.598968  212953 addons.go:239] Setting addon storage-provisioner=true in "no-preload-676905"
	I0110 02:47:15.598988  212953 host.go:66] Checking if "no-preload-676905" exists ...
	I0110 02:47:15.599481  212953 cli_runner.go:164] Run: docker container inspect no-preload-676905 --format={{.State.Status}}
	I0110 02:47:15.599894  212953 addons.go:70] Setting default-storageclass=true in profile "no-preload-676905"
	I0110 02:47:15.599919  212953 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-676905"
	I0110 02:47:15.600204  212953 cli_runner.go:164] Run: docker container inspect no-preload-676905 --format={{.State.Status}}
	I0110 02:47:15.610025  212953 out.go:179] * Verifying Kubernetes components...
	I0110 02:47:15.614343  212953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:47:15.634529  212953 addons.go:239] Setting addon default-storageclass=true in "no-preload-676905"
	I0110 02:47:15.634573  212953 host.go:66] Checking if "no-preload-676905" exists ...
	I0110 02:47:15.635047  212953 cli_runner.go:164] Run: docker container inspect no-preload-676905 --format={{.State.Status}}
	I0110 02:47:15.661340  212953 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:47:15.664329  212953 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:47:15.664350  212953 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:47:15.664451  212953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:47:15.674908  212953 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:47:15.674928  212953 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:47:15.675010  212953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:47:15.703324  212953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa Username:docker}
	I0110 02:47:15.722628  212953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa Username:docker}
	I0110 02:47:15.908143  212953 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
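The long pipeline above rewrites the coredns ConfigMap in place: it inserts a hosts block in front of the forward plugin so that host.minikube.internal resolves to the host-side gateway (192.168.76.1 on this network) and adds the log directive ahead of errors. Reconstructed from those sed expressions, the affected part of the resulting Corefile should look roughly like this (lines the command does not touch are elided):

.:53 {
    errors
    log
    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }
    ...
    forward . /etc/resolv.conf ...
    ...
}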
	I0110 02:47:15.917561  212953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:47:15.979108  212953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:47:16.055135  212953 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:47:16.562527  212953 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0110 02:47:16.564448  212953 node_ready.go:35] waiting up to 6m0s for node "no-preload-676905" to be "Ready" ...
	I0110 02:47:17.056833  212953 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0110 02:47:17.059708  212953 addons.go:530] duration metric: took 1.460802353s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0110 02:47:17.071861  212953 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-676905" context rescaled to 1 replicas
	W0110 02:47:18.567673  212953 node_ready.go:57] node "no-preload-676905" has "Ready":"False" status (will retry)
	W0110 02:47:20.568045  212953 node_ready.go:57] node "no-preload-676905" has "Ready":"False" status (will retry)
	W0110 02:47:23.068683  212953 node_ready.go:57] node "no-preload-676905" has "Ready":"False" status (will retry)
	W0110 02:47:25.567681  212953 node_ready.go:57] node "no-preload-676905" has "Ready":"False" status (will retry)
	W0110 02:47:28.067736  212953 node_ready.go:57] node "no-preload-676905" has "Ready":"False" status (will retry)
	I0110 02:47:29.568319  212953 node_ready.go:49] node "no-preload-676905" is "Ready"
	I0110 02:47:29.568347  212953 node_ready.go:38] duration metric: took 13.00385895s for node "no-preload-676905" to be "Ready" ...
	I0110 02:47:29.568360  212953 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:47:29.568446  212953 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:47:29.580665  212953 api_server.go:72] duration metric: took 13.982142979s to wait for apiserver process to appear ...
	I0110 02:47:29.580695  212953 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:47:29.580715  212953 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:47:29.590907  212953 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 02:47:29.592718  212953 api_server.go:141] control plane version: v1.35.0
	I0110 02:47:29.592747  212953 api_server.go:131] duration metric: took 12.044575ms to wait for apiserver health ...
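The healthz probe above is a plain HTTPS GET against the apiserver's /healthz endpoint, expecting a 200 response with the body "ok". A self-contained Go sketch of an equivalent check that trusts the cluster CA; the hard-coded paths and the assumption that /healthz is reachable without client credentials (via the default anonymous public-info-viewer binding) are illustrative:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Trust only the cluster CA used by this profile.
	caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		panic("could not parse ca.crt")
	}
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{RootCAs: pool},
	}}

	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // the run above saw "200: ok"
}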
	I0110 02:47:29.592756  212953 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:47:29.599000  212953 system_pods.go:59] 8 kube-system pods found
	I0110 02:47:29.599115  212953 system_pods.go:61] "coredns-7d764666f9-v67dz" [d212f09c-4573-4e47-ad15-50c9fdfeecd6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:47:29.599164  212953 system_pods.go:61] "etcd-no-preload-676905" [057fbf4d-96fb-423a-8d97-26392378f6a2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:47:29.599191  212953 system_pods.go:61] "kindnet-tsk2v" [d1006bf9-a95f-4260-9048-a78402602ff2] Running
	I0110 02:47:29.599213  212953 system_pods.go:61] "kube-apiserver-no-preload-676905" [9f7e4ecd-63b8-42f7-8ef7-a4ce47c14578] Running
	I0110 02:47:29.599296  212953 system_pods.go:61] "kube-controller-manager-no-preload-676905" [2baf8a7d-876b-4afe-a381-56e08abc7cc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:47:29.599325  212953 system_pods.go:61] "kube-proxy-r74hc" [0b78d574-77fa-4ec1-b986-d412d22f6a13] Running
	I0110 02:47:29.599351  212953 system_pods.go:61] "kube-scheduler-no-preload-676905" [71e28fec-7176-4dd8-89e0-77b4b1637652] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:47:29.599399  212953 system_pods.go:61] "storage-provisioner" [2867de9b-b16f-4cff-b553-1fb2f52db72b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:47:29.599422  212953 system_pods.go:74] duration metric: took 6.658709ms to wait for pod list to return data ...
	I0110 02:47:29.599458  212953 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:47:29.607706  212953 default_sa.go:45] found service account: "default"
	I0110 02:47:29.607777  212953 default_sa.go:55] duration metric: took 8.294169ms for default service account to be created ...
	I0110 02:47:29.607842  212953 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:47:29.610668  212953 system_pods.go:86] 8 kube-system pods found
	I0110 02:47:29.610707  212953 system_pods.go:89] "coredns-7d764666f9-v67dz" [d212f09c-4573-4e47-ad15-50c9fdfeecd6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:47:29.610715  212953 system_pods.go:89] "etcd-no-preload-676905" [057fbf4d-96fb-423a-8d97-26392378f6a2] Running
	I0110 02:47:29.610723  212953 system_pods.go:89] "kindnet-tsk2v" [d1006bf9-a95f-4260-9048-a78402602ff2] Running
	I0110 02:47:29.610729  212953 system_pods.go:89] "kube-apiserver-no-preload-676905" [9f7e4ecd-63b8-42f7-8ef7-a4ce47c14578] Running
	I0110 02:47:29.610737  212953 system_pods.go:89] "kube-controller-manager-no-preload-676905" [2baf8a7d-876b-4afe-a381-56e08abc7cc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:47:29.610744  212953 system_pods.go:89] "kube-proxy-r74hc" [0b78d574-77fa-4ec1-b986-d412d22f6a13] Running
	I0110 02:47:29.610751  212953 system_pods.go:89] "kube-scheduler-no-preload-676905" [71e28fec-7176-4dd8-89e0-77b4b1637652] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:47:29.610759  212953 system_pods.go:89] "storage-provisioner" [2867de9b-b16f-4cff-b553-1fb2f52db72b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:47:29.610783  212953 retry.go:84] will retry after 200ms: missing components: kube-dns
	I0110 02:47:29.844925  212953 system_pods.go:86] 8 kube-system pods found
	I0110 02:47:29.844964  212953 system_pods.go:89] "coredns-7d764666f9-v67dz" [d212f09c-4573-4e47-ad15-50c9fdfeecd6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:47:29.844971  212953 system_pods.go:89] "etcd-no-preload-676905" [057fbf4d-96fb-423a-8d97-26392378f6a2] Running
	I0110 02:47:29.844978  212953 system_pods.go:89] "kindnet-tsk2v" [d1006bf9-a95f-4260-9048-a78402602ff2] Running
	I0110 02:47:29.844983  212953 system_pods.go:89] "kube-apiserver-no-preload-676905" [9f7e4ecd-63b8-42f7-8ef7-a4ce47c14578] Running
	I0110 02:47:29.844990  212953 system_pods.go:89] "kube-controller-manager-no-preload-676905" [2baf8a7d-876b-4afe-a381-56e08abc7cc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:47:29.844998  212953 system_pods.go:89] "kube-proxy-r74hc" [0b78d574-77fa-4ec1-b986-d412d22f6a13] Running
	I0110 02:47:29.845005  212953 system_pods.go:89] "kube-scheduler-no-preload-676905" [71e28fec-7176-4dd8-89e0-77b4b1637652] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:47:29.845012  212953 system_pods.go:89] "storage-provisioner" [2867de9b-b16f-4cff-b553-1fb2f52db72b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:47:30.120664  212953 system_pods.go:86] 8 kube-system pods found
	I0110 02:47:30.120704  212953 system_pods.go:89] "coredns-7d764666f9-v67dz" [d212f09c-4573-4e47-ad15-50c9fdfeecd6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:47:30.120721  212953 system_pods.go:89] "etcd-no-preload-676905" [057fbf4d-96fb-423a-8d97-26392378f6a2] Running
	I0110 02:47:30.120728  212953 system_pods.go:89] "kindnet-tsk2v" [d1006bf9-a95f-4260-9048-a78402602ff2] Running
	I0110 02:47:30.120735  212953 system_pods.go:89] "kube-apiserver-no-preload-676905" [9f7e4ecd-63b8-42f7-8ef7-a4ce47c14578] Running
	I0110 02:47:30.120749  212953 system_pods.go:89] "kube-controller-manager-no-preload-676905" [2baf8a7d-876b-4afe-a381-56e08abc7cc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:47:30.120758  212953 system_pods.go:89] "kube-proxy-r74hc" [0b78d574-77fa-4ec1-b986-d412d22f6a13] Running
	I0110 02:47:30.120765  212953 system_pods.go:89] "kube-scheduler-no-preload-676905" [71e28fec-7176-4dd8-89e0-77b4b1637652] Running
	I0110 02:47:30.120772  212953 system_pods.go:89] "storage-provisioner" [2867de9b-b16f-4cff-b553-1fb2f52db72b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:47:30.567750  212953 system_pods.go:86] 8 kube-system pods found
	I0110 02:47:30.567782  212953 system_pods.go:89] "coredns-7d764666f9-v67dz" [d212f09c-4573-4e47-ad15-50c9fdfeecd6] Running
	I0110 02:47:30.567789  212953 system_pods.go:89] "etcd-no-preload-676905" [057fbf4d-96fb-423a-8d97-26392378f6a2] Running
	I0110 02:47:30.567819  212953 system_pods.go:89] "kindnet-tsk2v" [d1006bf9-a95f-4260-9048-a78402602ff2] Running
	I0110 02:47:30.567825  212953 system_pods.go:89] "kube-apiserver-no-preload-676905" [9f7e4ecd-63b8-42f7-8ef7-a4ce47c14578] Running
	I0110 02:47:30.567831  212953 system_pods.go:89] "kube-controller-manager-no-preload-676905" [2baf8a7d-876b-4afe-a381-56e08abc7cc2] Running
	I0110 02:47:30.567835  212953 system_pods.go:89] "kube-proxy-r74hc" [0b78d574-77fa-4ec1-b986-d412d22f6a13] Running
	I0110 02:47:30.567840  212953 system_pods.go:89] "kube-scheduler-no-preload-676905" [71e28fec-7176-4dd8-89e0-77b4b1637652] Running
	I0110 02:47:30.567844  212953 system_pods.go:89] "storage-provisioner" [2867de9b-b16f-4cff-b553-1fb2f52db72b] Running
	I0110 02:47:30.567851  212953 system_pods.go:126] duration metric: took 959.99857ms to wait for k8s-apps to be running ...
	I0110 02:47:30.567859  212953 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:47:30.567915  212953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:47:30.581894  212953 system_svc.go:56] duration metric: took 14.026217ms WaitForService to wait for kubelet
	I0110 02:47:30.581926  212953 kubeadm.go:587] duration metric: took 14.983407977s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:47:30.581944  212953 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:47:30.585151  212953 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 02:47:30.585183  212953 node_conditions.go:123] node cpu capacity is 2
	I0110 02:47:30.585199  212953 node_conditions.go:105] duration metric: took 3.248651ms to run NodePressure ...
	I0110 02:47:30.585213  212953 start.go:242] waiting for startup goroutines ...
	I0110 02:47:30.585221  212953 start.go:247] waiting for cluster config update ...
	I0110 02:47:30.585231  212953 start.go:256] writing updated cluster config ...
	I0110 02:47:30.585511  212953 ssh_runner.go:195] Run: rm -f paused
	I0110 02:47:30.589466  212953 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:47:30.592838  212953 pod_ready.go:83] waiting for pod "coredns-7d764666f9-v67dz" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:47:30.597518  212953 pod_ready.go:94] pod "coredns-7d764666f9-v67dz" is "Ready"
	I0110 02:47:30.597589  212953 pod_ready.go:86] duration metric: took 4.673581ms for pod "coredns-7d764666f9-v67dz" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:47:30.600107  212953 pod_ready.go:83] waiting for pod "etcd-no-preload-676905" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:47:30.604767  212953 pod_ready.go:94] pod "etcd-no-preload-676905" is "Ready"
	I0110 02:47:30.604793  212953 pod_ready.go:86] duration metric: took 4.663325ms for pod "etcd-no-preload-676905" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:47:30.607061  212953 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-676905" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:47:30.614287  212953 pod_ready.go:94] pod "kube-apiserver-no-preload-676905" is "Ready"
	I0110 02:47:30.614316  212953 pod_ready.go:86] duration metric: took 7.234259ms for pod "kube-apiserver-no-preload-676905" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:47:30.619235  212953 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-676905" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:47:30.993410  212953 pod_ready.go:94] pod "kube-controller-manager-no-preload-676905" is "Ready"
	I0110 02:47:30.993440  212953 pod_ready.go:86] duration metric: took 374.175579ms for pod "kube-controller-manager-no-preload-676905" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:47:31.194319  212953 pod_ready.go:83] waiting for pod "kube-proxy-r74hc" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:47:31.593882  212953 pod_ready.go:94] pod "kube-proxy-r74hc" is "Ready"
	I0110 02:47:31.593958  212953 pod_ready.go:86] duration metric: took 399.601096ms for pod "kube-proxy-r74hc" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:47:31.793916  212953 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-676905" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:47:32.193212  212953 pod_ready.go:94] pod "kube-scheduler-no-preload-676905" is "Ready"
	I0110 02:47:32.193240  212953 pod_ready.go:86] duration metric: took 399.243109ms for pod "kube-scheduler-no-preload-676905" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:47:32.193255  212953 pod_ready.go:40] duration metric: took 1.603723865s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:47:32.242137  212953 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 02:47:32.245336  212953 out.go:203] 
	W0110 02:47:32.248368  212953 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 02:47:32.251329  212953 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:47:32.255045  212953 out.go:179] * Done! kubectl is now configured to use "no-preload-676905" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 10 02:47:29 no-preload-676905 crio[835]: time="2026-01-10T02:47:29.769046343Z" level=info msg="Created container f8f893d20785dda6ad705a1941defa376e53db224e3a9320f16c6c601bace131: kube-system/coredns-7d764666f9-v67dz/coredns" id=acaad010-c69e-4d58-98d4-10975e223306 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:47:29 no-preload-676905 crio[835]: time="2026-01-10T02:47:29.770156762Z" level=info msg="Starting container: f8f893d20785dda6ad705a1941defa376e53db224e3a9320f16c6c601bace131" id=51925698-dc46-4142-be39-4476e7d37d36 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:47:29 no-preload-676905 crio[835]: time="2026-01-10T02:47:29.772285345Z" level=info msg="Started container" PID=2426 containerID=f8f893d20785dda6ad705a1941defa376e53db224e3a9320f16c6c601bace131 description=kube-system/coredns-7d764666f9-v67dz/coredns id=51925698-dc46-4142-be39-4476e7d37d36 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0fab66193b43d947ca28d5050dec97ff4edfebd33411117d91285f169c698dd2
	Jan 10 02:47:32 no-preload-676905 crio[835]: time="2026-01-10T02:47:32.740387245Z" level=info msg="Running pod sandbox: default/busybox/POD" id=63e860aa-5921-4a03-8fc3-1cdecf03576b name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:47:32 no-preload-676905 crio[835]: time="2026-01-10T02:47:32.74046392Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:47:32 no-preload-676905 crio[835]: time="2026-01-10T02:47:32.745473566Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7906f408ec46175f838e385c5fde9f23ab72ee2cd2785662899a03b5f507fe96 UID:a06101cc-6efa-4b52-aa20-b89a0e6bf859 NetNS:/var/run/netns/2ebe3cfc-6769-4872-979f-bdff98e753cf Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004b33c8}] Aliases:map[]}"
	Jan 10 02:47:32 no-preload-676905 crio[835]: time="2026-01-10T02:47:32.745507312Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 10 02:47:32 no-preload-676905 crio[835]: time="2026-01-10T02:47:32.76081987Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7906f408ec46175f838e385c5fde9f23ab72ee2cd2785662899a03b5f507fe96 UID:a06101cc-6efa-4b52-aa20-b89a0e6bf859 NetNS:/var/run/netns/2ebe3cfc-6769-4872-979f-bdff98e753cf Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004b33c8}] Aliases:map[]}"
	Jan 10 02:47:32 no-preload-676905 crio[835]: time="2026-01-10T02:47:32.760967492Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 10 02:47:32 no-preload-676905 crio[835]: time="2026-01-10T02:47:32.764760408Z" level=info msg="Ran pod sandbox 7906f408ec46175f838e385c5fde9f23ab72ee2cd2785662899a03b5f507fe96 with infra container: default/busybox/POD" id=63e860aa-5921-4a03-8fc3-1cdecf03576b name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:47:32 no-preload-676905 crio[835]: time="2026-01-10T02:47:32.766121537Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8e3b70c9-fbe1-40ff-950e-c78c90fed6e4 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:47:32 no-preload-676905 crio[835]: time="2026-01-10T02:47:32.766248598Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=8e3b70c9-fbe1-40ff-950e-c78c90fed6e4 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:47:32 no-preload-676905 crio[835]: time="2026-01-10T02:47:32.766327439Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=8e3b70c9-fbe1-40ff-950e-c78c90fed6e4 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:47:32 no-preload-676905 crio[835]: time="2026-01-10T02:47:32.767478136Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=72e4e23a-7228-4915-a1aa-1360e2c4b5e6 name=/runtime.v1.ImageService/PullImage
	Jan 10 02:47:32 no-preload-676905 crio[835]: time="2026-01-10T02:47:32.768053825Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 10 02:47:34 no-preload-676905 crio[835]: time="2026-01-10T02:47:34.858742372Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=72e4e23a-7228-4915-a1aa-1360e2c4b5e6 name=/runtime.v1.ImageService/PullImage
	Jan 10 02:47:34 no-preload-676905 crio[835]: time="2026-01-10T02:47:34.859304728Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=88001770-6066-4c8c-ade7-f41a8c144df7 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:47:34 no-preload-676905 crio[835]: time="2026-01-10T02:47:34.861021581Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=313fee90-811b-41e5-a397-bcd30aa95931 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:47:34 no-preload-676905 crio[835]: time="2026-01-10T02:47:34.866208207Z" level=info msg="Creating container: default/busybox/busybox" id=30c9f750-d7b1-40cf-aaf6-fac21a039b0e name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:47:34 no-preload-676905 crio[835]: time="2026-01-10T02:47:34.866301923Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:47:34 no-preload-676905 crio[835]: time="2026-01-10T02:47:34.870761135Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:47:34 no-preload-676905 crio[835]: time="2026-01-10T02:47:34.871336455Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:47:34 no-preload-676905 crio[835]: time="2026-01-10T02:47:34.885119469Z" level=info msg="Created container 3c2ae5e74cdd286cb85235f8fbf72d1fd29486bfbbf1d5aed60c96808ac1c128: default/busybox/busybox" id=30c9f750-d7b1-40cf-aaf6-fac21a039b0e name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:47:34 no-preload-676905 crio[835]: time="2026-01-10T02:47:34.88769508Z" level=info msg="Starting container: 3c2ae5e74cdd286cb85235f8fbf72d1fd29486bfbbf1d5aed60c96808ac1c128" id=fc654d83-58e7-4413-ad0d-b8c9d0538403 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:47:34 no-preload-676905 crio[835]: time="2026-01-10T02:47:34.89015318Z" level=info msg="Started container" PID=2485 containerID=3c2ae5e74cdd286cb85235f8fbf72d1fd29486bfbbf1d5aed60c96808ac1c128 description=default/busybox/busybox id=fc654d83-58e7-4413-ad0d-b8c9d0538403 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7906f408ec46175f838e385c5fde9f23ab72ee2cd2785662899a03b5f507fe96
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3c2ae5e74cdd2       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago       Running             busybox                   0                   7906f408ec461       busybox                                     default
	f8f893d20785d       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                      14 seconds ago      Running             coredns                   0                   0fab66193b43d       coredns-7d764666f9-v67dz                    kube-system
	bf2309355ff73       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      14 seconds ago      Running             storage-provisioner       0                   57a2d58337f4b       storage-provisioner                         kube-system
	ebd4dacce5910       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    25 seconds ago      Running             kindnet-cni               0                   c9dbab24a4461       kindnet-tsk2v                               kube-system
	b43b13f04fcfb       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                      27 seconds ago      Running             kube-proxy                0                   191b122eec00a       kube-proxy-r74hc                            kube-system
	db96befe68aa0       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                      38 seconds ago      Running             kube-controller-manager   0                   68df3dc7f9086       kube-controller-manager-no-preload-676905   kube-system
	0a434ca450086       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                      38 seconds ago      Running             kube-apiserver            0                   4a121f8fb0d01       kube-apiserver-no-preload-676905            kube-system
	0733808122dd7       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                      38 seconds ago      Running             kube-scheduler            0                   f8de28ced40ed       kube-scheduler-no-preload-676905            kube-system
	c521bd37661ee       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                      38 seconds ago      Running             etcd                      0                   9aabc5fbcd7d7       etcd-no-preload-676905                      kube-system
	
	
	==> coredns [f8f893d20785dda6ad705a1941defa376e53db224e3a9320f16c6c601bace131] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:43650 - 38479 "HINFO IN 1552714697426457917.3897817261833987490. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027085796s
	
	
	==> describe nodes <==
	Name:               no-preload-676905
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-676905
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=no-preload-676905
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_47_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:47:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-676905
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:47:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:47:41 +0000   Sat, 10 Jan 2026 02:47:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:47:41 +0000   Sat, 10 Jan 2026 02:47:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:47:41 +0000   Sat, 10 Jan 2026 02:47:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:47:41 +0000   Sat, 10 Jan 2026 02:47:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-676905
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                f6842875-d9d6-4f29-b119-b957541c22e9
	  Boot ID:                    41f52236-76b9-4dc8-bfe3-121fbdeb9659
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-7d764666f9-v67dz                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-no-preload-676905                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-tsk2v                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-no-preload-676905             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-no-preload-676905    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-r74hc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-no-preload-676905             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  30s   node-controller  Node no-preload-676905 event: Registered Node no-preload-676905 in Controller
	
	
	==> dmesg <==
	[Jan10 02:14] overlayfs: idmapped layers are currently not supported
	[ +27.765975] overlayfs: idmapped layers are currently not supported
	[Jan10 02:15] overlayfs: idmapped layers are currently not supported
	[Jan10 02:16] overlayfs: idmapped layers are currently not supported
	[Jan10 02:17] overlayfs: idmapped layers are currently not supported
	[Jan10 02:18] overlayfs: idmapped layers are currently not supported
	[ +23.212409] overlayfs: idmapped layers are currently not supported
	[  +9.866169] overlayfs: idmapped layers are currently not supported
	[Jan10 02:19] overlayfs: idmapped layers are currently not supported
	[Jan10 02:20] overlayfs: idmapped layers are currently not supported
	[ +35.960240] overlayfs: idmapped layers are currently not supported
	[Jan10 02:21] overlayfs: idmapped layers are currently not supported
	[Jan10 02:23] overlayfs: idmapped layers are currently not supported
	[Jan10 02:24] overlayfs: idmapped layers are currently not supported
	[Jan10 02:25] overlayfs: idmapped layers are currently not supported
	[Jan10 02:30] overlayfs: idmapped layers are currently not supported
	[Jan10 02:31] overlayfs: idmapped layers are currently not supported
	[Jan10 02:35] overlayfs: idmapped layers are currently not supported
	[Jan10 02:37] overlayfs: idmapped layers are currently not supported
	[Jan10 02:41] overlayfs: idmapped layers are currently not supported
	[ +37.534021] overlayfs: idmapped layers are currently not supported
	[Jan10 02:43] overlayfs: idmapped layers are currently not supported
	[Jan10 02:44] overlayfs: idmapped layers are currently not supported
	[Jan10 02:45] overlayfs: idmapped layers are currently not supported
	[Jan10 02:46] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c521bd37661eea848db1aeb309ad2fe93befca89b26dfd885671692c93d6bb6e] <==
	{"level":"info","ts":"2026-01-10T02:47:05.317199Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:47:05.670392Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-10T02:47:05.670454Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-10T02:47:05.670528Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2026-01-10T02:47:05.670549Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:47:05.670564Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:47:05.672035Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T02:47:05.672088Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:47:05.672116Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2026-01-10T02:47:05.672126Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T02:47:05.678342Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:47:05.679489Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-676905 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:47:05.679520Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:47:05.679615Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:47:05.680479Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:47:05.682591Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T02:47:05.683449Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:47:05.684191Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T02:47:05.687885Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:47:05.687958Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:47:05.688076Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:47:05.688174Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:47:05.688231Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:47:05.688310Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T02:47:05.688394Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	
	
	==> kernel <==
	 02:47:44 up  1:30,  0 user,  load average: 1.93, 1.79, 1.80
	Linux no-preload-676905 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ebd4dacce591004c3fc2b50ef1e837f783f752e5b46040c5a6bde664895b8d92] <==
	I0110 02:47:18.832826       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:47:18.924058       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 02:47:18.924188       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:47:18.924207       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:47:18.924224       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:47:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:47:19.124719       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:47:19.124887       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:47:19.125233       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:47:19.125402       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 02:47:19.425596       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:47:19.425632       1 metrics.go:72] Registering metrics
	I0110 02:47:19.425704       1 controller.go:711] "Syncing nftables rules"
	I0110 02:47:29.033918       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:47:29.033972       1 main.go:301] handling current node
	I0110 02:47:39.034259       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:47:39.034292       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0a434ca45008630ee1d1863e05848c7dbd57b0da6129dc5c007a078f716d9c69] <==
	I0110 02:47:07.737394       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:47:07.751184       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:47:07.760145       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 02:47:07.770835       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:47:07.770960       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 02:47:07.878860       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:47:08.340105       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0110 02:47:08.345880       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0110 02:47:08.345909       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:47:09.059273       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:47:09.113802       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:47:09.253152       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0110 02:47:09.260922       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0110 02:47:09.262112       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 02:47:09.267204       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:47:09.545108       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:47:10.378349       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:47:10.401311       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0110 02:47:10.413036       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0110 02:47:15.101075       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:47:15.300520       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0110 02:47:15.300572       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0110 02:47:15.382129       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:47:15.399874       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E0110 02:47:42.574254       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:41226: use of closed network connection
	
	
	==> kube-controller-manager [db96befe68aa0cf07294671c7b5832cd7126c465abbae2fd4e84cd79daf169c8] <==
	I0110 02:47:14.356222       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-676905"
	I0110 02:47:14.356406       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0110 02:47:14.356448       1 shared_informer.go:377] "Caches are synced"
	I0110 02:47:14.356688       1 shared_informer.go:377] "Caches are synced"
	I0110 02:47:14.358221       1 shared_informer.go:377] "Caches are synced"
	I0110 02:47:14.358539       1 shared_informer.go:377] "Caches are synced"
	I0110 02:47:14.358839       1 shared_informer.go:377] "Caches are synced"
	I0110 02:47:14.359288       1 shared_informer.go:377] "Caches are synced"
	I0110 02:47:14.359764       1 shared_informer.go:377] "Caches are synced"
	I0110 02:47:14.359934       1 shared_informer.go:377] "Caches are synced"
	I0110 02:47:14.360108       1 shared_informer.go:377] "Caches are synced"
	I0110 02:47:14.360732       1 shared_informer.go:377] "Caches are synced"
	I0110 02:47:14.351231       1 shared_informer.go:377] "Caches are synced"
	I0110 02:47:14.351211       1 shared_informer.go:377] "Caches are synced"
	I0110 02:47:14.373966       1 shared_informer.go:377] "Caches are synced"
	I0110 02:47:14.374022       1 shared_informer.go:377] "Caches are synced"
	I0110 02:47:14.374155       1 shared_informer.go:377] "Caches are synced"
	I0110 02:47:14.391113       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-676905" podCIDRs=["10.244.0.0/24"]
	I0110 02:47:14.392093       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:47:14.394955       1 shared_informer.go:377] "Caches are synced"
	I0110 02:47:14.451876       1 shared_informer.go:377] "Caches are synced"
	I0110 02:47:14.451898       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:47:14.451904       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:47:14.492468       1 shared_informer.go:377] "Caches are synced"
	I0110 02:47:29.358293       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [b43b13f04fcfb66093e028753dc34a72855b21857fa51020d8afc96804402518] <==
	I0110 02:47:16.530967       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:47:16.641545       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:47:16.752942       1 shared_informer.go:377] "Caches are synced"
	I0110 02:47:16.752981       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 02:47:16.753071       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:47:16.811864       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:47:16.811912       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:47:16.822079       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:47:16.822423       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:47:16.822450       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:47:16.824327       1 config.go:200] "Starting service config controller"
	I0110 02:47:16.824344       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:47:16.824362       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:47:16.824367       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:47:16.824384       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:47:16.824387       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:47:16.825151       1 config.go:309] "Starting node config controller"
	I0110 02:47:16.825164       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:47:16.825183       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:47:16.924422       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 02:47:16.924490       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:47:16.924516       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [0733808122dd7c9fd30be1c9cae3f54c00b9c8c7df56c537af8e7a469e757eb3] <==
	E0110 02:47:07.589635       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 02:47:07.589732       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 02:47:07.601066       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E0110 02:47:07.608442       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 02:47:07.608598       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 02:47:07.608679       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 02:47:07.608760       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 02:47:07.608833       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 02:47:07.608927       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 02:47:07.609008       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 02:47:07.609082       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 02:47:07.609178       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 02:47:07.609268       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 02:47:07.609354       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 02:47:08.444613       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 02:47:08.488356       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 02:47:08.523371       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E0110 02:47:08.539227       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 02:47:08.596010       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 02:47:08.621058       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 02:47:08.654287       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 02:47:08.655499       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 02:47:08.684067       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 02:47:08.785504       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	I0110 02:47:11.549057       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:47:15 no-preload-676905 kubelet[1933]: E0110 02:47:15.469228    1933 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b78d574-77fa-4ec1-b986-d412d22f6a13-kube-api-access-mjp4s podName:0b78d574-77fa-4ec1-b986-d412d22f6a13 nodeName:}" failed. No retries permitted until 2026-01-10 02:47:15.969202451 +0000 UTC m=+5.762236098 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mjp4s" (UniqueName: "kubernetes.io/projected/0b78d574-77fa-4ec1-b986-d412d22f6a13-kube-api-access-mjp4s") pod "kube-proxy-r74hc" (UID: "0b78d574-77fa-4ec1-b986-d412d22f6a13") : configmap "kube-root-ca.crt" not found
	Jan 10 02:47:15 no-preload-676905 kubelet[1933]: I0110 02:47:15.565625    1933 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Jan 10 02:47:16 no-preload-676905 kubelet[1933]: W0110 02:47:16.282965    1933 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/edb3b90bff05a0b60beee1473618b471499355017fcb496f3f3b9d44b8906d49/crio-191b122eec00a82b554362bf6c75b7a996cbf0d3e714be9dcfccf596e64990fb WatchSource:0}: Error finding container 191b122eec00a82b554362bf6c75b7a996cbf0d3e714be9dcfccf596e64990fb: Status 404 returned error can't find the container with id 191b122eec00a82b554362bf6c75b7a996cbf0d3e714be9dcfccf596e64990fb
	Jan 10 02:47:16 no-preload-676905 kubelet[1933]: E0110 02:47:16.505144    1933 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-676905" containerName="kube-apiserver"
	Jan 10 02:47:19 no-preload-676905 kubelet[1933]: I0110 02:47:19.420394    1933 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-r74hc" podStartSLOduration=4.420377542 podStartE2EDuration="4.420377542s" podCreationTimestamp="2026-01-10 02:47:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:47:17.414203937 +0000 UTC m=+7.207237601" watchObservedRunningTime="2026-01-10 02:47:19.420377542 +0000 UTC m=+9.213411197"
	Jan 10 02:47:19 no-preload-676905 kubelet[1933]: E0110 02:47:19.563552    1933 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-676905" containerName="etcd"
	Jan 10 02:47:19 no-preload-676905 kubelet[1933]: I0110 02:47:19.576603    1933 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-tsk2v" podStartSLOduration=1.564765976 podStartE2EDuration="4.576586775s" podCreationTimestamp="2026-01-10 02:47:15 +0000 UTC" firstStartedPulling="2026-01-10 02:47:15.720661191 +0000 UTC m=+5.513694838" lastFinishedPulling="2026-01-10 02:47:18.73248199 +0000 UTC m=+8.525515637" observedRunningTime="2026-01-10 02:47:19.421033943 +0000 UTC m=+9.214067590" watchObservedRunningTime="2026-01-10 02:47:19.576586775 +0000 UTC m=+9.369620430"
	Jan 10 02:47:20 no-preload-676905 kubelet[1933]: E0110 02:47:20.091419    1933 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-676905" containerName="kube-scheduler"
	Jan 10 02:47:20 no-preload-676905 kubelet[1933]: E0110 02:47:20.178128    1933 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-676905" containerName="kube-controller-manager"
	Jan 10 02:47:26 no-preload-676905 kubelet[1933]: E0110 02:47:26.518021    1933 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-676905" containerName="kube-apiserver"
	Jan 10 02:47:29 no-preload-676905 kubelet[1933]: I0110 02:47:29.295252    1933 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Jan 10 02:47:29 no-preload-676905 kubelet[1933]: I0110 02:47:29.376613    1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d212f09c-4573-4e47-ad15-50c9fdfeecd6-config-volume\") pod \"coredns-7d764666f9-v67dz\" (UID: \"d212f09c-4573-4e47-ad15-50c9fdfeecd6\") " pod="kube-system/coredns-7d764666f9-v67dz"
	Jan 10 02:47:29 no-preload-676905 kubelet[1933]: I0110 02:47:29.376662    1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrv7r\" (UniqueName: \"kubernetes.io/projected/d212f09c-4573-4e47-ad15-50c9fdfeecd6-kube-api-access-xrv7r\") pod \"coredns-7d764666f9-v67dz\" (UID: \"d212f09c-4573-4e47-ad15-50c9fdfeecd6\") " pod="kube-system/coredns-7d764666f9-v67dz"
	Jan 10 02:47:29 no-preload-676905 kubelet[1933]: I0110 02:47:29.376688    1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2867de9b-b16f-4cff-b553-1fb2f52db72b-tmp\") pod \"storage-provisioner\" (UID: \"2867de9b-b16f-4cff-b553-1fb2f52db72b\") " pod="kube-system/storage-provisioner"
	Jan 10 02:47:29 no-preload-676905 kubelet[1933]: I0110 02:47:29.376708    1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z58f\" (UniqueName: \"kubernetes.io/projected/2867de9b-b16f-4cff-b553-1fb2f52db72b-kube-api-access-6z58f\") pod \"storage-provisioner\" (UID: \"2867de9b-b16f-4cff-b553-1fb2f52db72b\") " pod="kube-system/storage-provisioner"
	Jan 10 02:47:29 no-preload-676905 kubelet[1933]: E0110 02:47:29.564956    1933 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-676905" containerName="etcd"
	Jan 10 02:47:30 no-preload-676905 kubelet[1933]: E0110 02:47:30.100850    1933 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-676905" containerName="kube-scheduler"
	Jan 10 02:47:30 no-preload-676905 kubelet[1933]: E0110 02:47:30.187110    1933 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-676905" containerName="kube-controller-manager"
	Jan 10 02:47:30 no-preload-676905 kubelet[1933]: E0110 02:47:30.430945    1933 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-v67dz" containerName="coredns"
	Jan 10 02:47:30 no-preload-676905 kubelet[1933]: I0110 02:47:30.456974    1933 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-v67dz" podStartSLOduration=15.45696032 podStartE2EDuration="15.45696032s" podCreationTimestamp="2026-01-10 02:47:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:47:30.447643672 +0000 UTC m=+20.240677335" watchObservedRunningTime="2026-01-10 02:47:30.45696032 +0000 UTC m=+20.249993975"
	Jan 10 02:47:31 no-preload-676905 kubelet[1933]: E0110 02:47:31.434082    1933 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-v67dz" containerName="coredns"
	Jan 10 02:47:32 no-preload-676905 kubelet[1933]: I0110 02:47:32.430580    1933 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.430562092 podStartE2EDuration="15.430562092s" podCreationTimestamp="2026-01-10 02:47:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:47:30.521805447 +0000 UTC m=+20.314839094" watchObservedRunningTime="2026-01-10 02:47:32.430562092 +0000 UTC m=+22.223595739"
	Jan 10 02:47:32 no-preload-676905 kubelet[1933]: E0110 02:47:32.436773    1933 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-v67dz" containerName="coredns"
	Jan 10 02:47:32 no-preload-676905 kubelet[1933]: I0110 02:47:32.498230    1933 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lswwz\" (UniqueName: \"kubernetes.io/projected/a06101cc-6efa-4b52-aa20-b89a0e6bf859-kube-api-access-lswwz\") pod \"busybox\" (UID: \"a06101cc-6efa-4b52-aa20-b89a0e6bf859\") " pod="default/busybox"
	Jan 10 02:47:32 no-preload-676905 kubelet[1933]: W0110 02:47:32.764633    1933 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/edb3b90bff05a0b60beee1473618b471499355017fcb496f3f3b9d44b8906d49/crio-7906f408ec46175f838e385c5fde9f23ab72ee2cd2785662899a03b5f507fe96 WatchSource:0}: Error finding container 7906f408ec46175f838e385c5fde9f23ab72ee2cd2785662899a03b5f507fe96: Status 404 returned error can't find the container with id 7906f408ec46175f838e385c5fde9f23ab72ee2cd2785662899a03b5f507fe96
	
	
	==> storage-provisioner [bf2309355ff73acda149feec1f7715a2a4387445d56b748e9ef9992f96b2128f] <==
	I0110 02:47:29.760550       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 02:47:29.788835       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 02:47:29.788977       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 02:47:29.793124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:47:29.800139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:47:29.800461       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 02:47:29.800698       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-676905_25ff8b69-8d74-4754-ac4e-1c58a0693f77!
	I0110 02:47:29.801505       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d3b5d431-8073-4c52-ab3f-9b40b241d7ee", APIVersion:"v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-676905_25ff8b69-8d74-4754-ac4e-1c58a0693f77 became leader
	W0110 02:47:29.808084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:47:29.839171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:47:29.900863       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-676905_25ff8b69-8d74-4754-ac4e-1c58a0693f77!
	W0110 02:47:31.842748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:47:31.849804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:47:33.853213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:47:33.860831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:47:35.863443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:47:35.869984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:47:37.872779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:47:37.877240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:47:39.880990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:47:39.887584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:47:41.890727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:47:41.895292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:47:43.898415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:47:43.905812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-676905 -n no-preload-676905
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-676905 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.39s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (8.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-676905 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-676905 --alsologtostderr -v=1: exit status 80 (2.559984453s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-676905 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:48:56.421576  220152 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:48:56.421715  220152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:48:56.421726  220152 out.go:374] Setting ErrFile to fd 2...
	I0110 02:48:56.421746  220152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:48:56.422020  220152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:48:56.422326  220152 out.go:368] Setting JSON to false
	I0110 02:48:56.422347  220152 mustload.go:66] Loading cluster: no-preload-676905
	I0110 02:48:56.422792  220152 config.go:182] Loaded profile config "no-preload-676905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:48:56.423287  220152 cli_runner.go:164] Run: docker container inspect no-preload-676905 --format={{.State.Status}}
	I0110 02:48:56.440945  220152 host.go:66] Checking if "no-preload-676905" exists ...
	I0110 02:48:56.441975  220152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:48:56.505531  220152 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2026-01-10 02:48:56.489442021 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:48:56.506176  220152 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22414/minikube-v1.37.0-1767924026-22414-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767924026-22414/minikube-v1.37.0-1767924026-22414-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767924026-22414-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:no-preload-676905 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0110 02:48:56.510957  220152 out.go:179] * Pausing node no-preload-676905 ... 
	I0110 02:48:56.514564  220152 host.go:66] Checking if "no-preload-676905" exists ...
	I0110 02:48:56.514979  220152 ssh_runner.go:195] Run: systemctl --version
	I0110 02:48:56.515044  220152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:48:56.540992  220152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa Username:docker}
	I0110 02:48:56.642719  220152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:48:56.655967  220152 pause.go:52] kubelet running: true
	I0110 02:48:56.656064  220152 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:48:56.883008  220152 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:48:56.883098  220152 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:48:56.966041  220152 cri.go:96] found id: "92b43250d4836d6795421decd73ba4fd8ee1fd121d721eb0db2e6514bbd285c8"
	I0110 02:48:56.966066  220152 cri.go:96] found id: "89055935644acadcf02b28ab8d6db16656e01c56a0338f89d963ab2396bdcd1c"
	I0110 02:48:56.966070  220152 cri.go:96] found id: "068ee48cb5d42288b992f6422f0f6edf35a1579daa69d7eb3ffb9e08f9182b2a"
	I0110 02:48:56.966074  220152 cri.go:96] found id: "27193b01d487358eda6303830a14176f18fda2cc8ccb2d5478cc50ca9111e0fa"
	I0110 02:48:56.966078  220152 cri.go:96] found id: "93572ed1012d65280df6c8e67b6f1922bebba09495f4249d9770c01186da9814"
	I0110 02:48:56.966081  220152 cri.go:96] found id: "6896022c3f268165644bf92599f94c980310297bd558b3038a54e557be2f3059"
	I0110 02:48:56.966084  220152 cri.go:96] found id: "a7ff1e6390fa775c123c587323c7e2f6cec3b4b2d846529ec2347776ff26d6f4"
	I0110 02:48:56.966108  220152 cri.go:96] found id: "d553ba4a2565012123c50e31a0791c4773b9cf73ab9ea054e4d6cac50e536f76"
	I0110 02:48:56.966119  220152 cri.go:96] found id: "e373472937baf1f075e16e1423aa5e5fbcd79cb428ae5d27b8fc181cd4d93dae"
	I0110 02:48:56.966137  220152 cri.go:96] found id: "9ca2cd8a5c9b3072491c22f97f3f70d330eac60ee6c190558e871b3e7c11e63b"
	I0110 02:48:56.966141  220152 cri.go:96] found id: "1f6b2e2b36205985d05a8d8ccaa07bf33db63d212ad795c058c0b9fffacd012a"
	I0110 02:48:56.966145  220152 cri.go:96] found id: ""
	I0110 02:48:56.966215  220152 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:48:56.977732  220152 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:48:56Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:48:57.263223  220152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:48:57.277403  220152 pause.go:52] kubelet running: false
	I0110 02:48:57.277525  220152 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:48:57.503706  220152 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:48:57.503782  220152 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:48:57.638972  220152 cri.go:96] found id: "92b43250d4836d6795421decd73ba4fd8ee1fd121d721eb0db2e6514bbd285c8"
	I0110 02:48:57.639015  220152 cri.go:96] found id: "89055935644acadcf02b28ab8d6db16656e01c56a0338f89d963ab2396bdcd1c"
	I0110 02:48:57.639021  220152 cri.go:96] found id: "068ee48cb5d42288b992f6422f0f6edf35a1579daa69d7eb3ffb9e08f9182b2a"
	I0110 02:48:57.639025  220152 cri.go:96] found id: "27193b01d487358eda6303830a14176f18fda2cc8ccb2d5478cc50ca9111e0fa"
	I0110 02:48:57.639028  220152 cri.go:96] found id: "93572ed1012d65280df6c8e67b6f1922bebba09495f4249d9770c01186da9814"
	I0110 02:48:57.639031  220152 cri.go:96] found id: "6896022c3f268165644bf92599f94c980310297bd558b3038a54e557be2f3059"
	I0110 02:48:57.639034  220152 cri.go:96] found id: "a7ff1e6390fa775c123c587323c7e2f6cec3b4b2d846529ec2347776ff26d6f4"
	I0110 02:48:57.639037  220152 cri.go:96] found id: "d553ba4a2565012123c50e31a0791c4773b9cf73ab9ea054e4d6cac50e536f76"
	I0110 02:48:57.639040  220152 cri.go:96] found id: "e373472937baf1f075e16e1423aa5e5fbcd79cb428ae5d27b8fc181cd4d93dae"
	I0110 02:48:57.639055  220152 cri.go:96] found id: "9ca2cd8a5c9b3072491c22f97f3f70d330eac60ee6c190558e871b3e7c11e63b"
	I0110 02:48:57.639059  220152 cri.go:96] found id: "1f6b2e2b36205985d05a8d8ccaa07bf33db63d212ad795c058c0b9fffacd012a"
	I0110 02:48:57.639062  220152 cri.go:96] found id: ""
	I0110 02:48:57.639117  220152 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:48:57.950032  220152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:48:57.974202  220152 pause.go:52] kubelet running: false
	I0110 02:48:57.974275  220152 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:48:58.211052  220152 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:48:58.211135  220152 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:48:58.296350  220152 cri.go:96] found id: "92b43250d4836d6795421decd73ba4fd8ee1fd121d721eb0db2e6514bbd285c8"
	I0110 02:48:58.296370  220152 cri.go:96] found id: "89055935644acadcf02b28ab8d6db16656e01c56a0338f89d963ab2396bdcd1c"
	I0110 02:48:58.296375  220152 cri.go:96] found id: "068ee48cb5d42288b992f6422f0f6edf35a1579daa69d7eb3ffb9e08f9182b2a"
	I0110 02:48:58.296379  220152 cri.go:96] found id: "27193b01d487358eda6303830a14176f18fda2cc8ccb2d5478cc50ca9111e0fa"
	I0110 02:48:58.296383  220152 cri.go:96] found id: "93572ed1012d65280df6c8e67b6f1922bebba09495f4249d9770c01186da9814"
	I0110 02:48:58.296387  220152 cri.go:96] found id: "6896022c3f268165644bf92599f94c980310297bd558b3038a54e557be2f3059"
	I0110 02:48:58.296390  220152 cri.go:96] found id: "a7ff1e6390fa775c123c587323c7e2f6cec3b4b2d846529ec2347776ff26d6f4"
	I0110 02:48:58.296393  220152 cri.go:96] found id: "d553ba4a2565012123c50e31a0791c4773b9cf73ab9ea054e4d6cac50e536f76"
	I0110 02:48:58.296396  220152 cri.go:96] found id: "e373472937baf1f075e16e1423aa5e5fbcd79cb428ae5d27b8fc181cd4d93dae"
	I0110 02:48:58.296403  220152 cri.go:96] found id: "9ca2cd8a5c9b3072491c22f97f3f70d330eac60ee6c190558e871b3e7c11e63b"
	I0110 02:48:58.296406  220152 cri.go:96] found id: "1f6b2e2b36205985d05a8d8ccaa07bf33db63d212ad795c058c0b9fffacd012a"
	I0110 02:48:58.296409  220152 cri.go:96] found id: ""
	I0110 02:48:58.296456  220152 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:48:58.608258  220152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:48:58.627208  220152 pause.go:52] kubelet running: false
	I0110 02:48:58.627283  220152 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:48:58.821239  220152 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:48:58.821324  220152 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:48:58.905462  220152 cri.go:96] found id: "92b43250d4836d6795421decd73ba4fd8ee1fd121d721eb0db2e6514bbd285c8"
	I0110 02:48:58.905483  220152 cri.go:96] found id: "89055935644acadcf02b28ab8d6db16656e01c56a0338f89d963ab2396bdcd1c"
	I0110 02:48:58.905489  220152 cri.go:96] found id: "068ee48cb5d42288b992f6422f0f6edf35a1579daa69d7eb3ffb9e08f9182b2a"
	I0110 02:48:58.905493  220152 cri.go:96] found id: "27193b01d487358eda6303830a14176f18fda2cc8ccb2d5478cc50ca9111e0fa"
	I0110 02:48:58.905496  220152 cri.go:96] found id: "93572ed1012d65280df6c8e67b6f1922bebba09495f4249d9770c01186da9814"
	I0110 02:48:58.905499  220152 cri.go:96] found id: "6896022c3f268165644bf92599f94c980310297bd558b3038a54e557be2f3059"
	I0110 02:48:58.905502  220152 cri.go:96] found id: "a7ff1e6390fa775c123c587323c7e2f6cec3b4b2d846529ec2347776ff26d6f4"
	I0110 02:48:58.905506  220152 cri.go:96] found id: "d553ba4a2565012123c50e31a0791c4773b9cf73ab9ea054e4d6cac50e536f76"
	I0110 02:48:58.905509  220152 cri.go:96] found id: "e373472937baf1f075e16e1423aa5e5fbcd79cb428ae5d27b8fc181cd4d93dae"
	I0110 02:48:58.905519  220152 cri.go:96] found id: "9ca2cd8a5c9b3072491c22f97f3f70d330eac60ee6c190558e871b3e7c11e63b"
	I0110 02:48:58.905522  220152 cri.go:96] found id: "1f6b2e2b36205985d05a8d8ccaa07bf33db63d212ad795c058c0b9fffacd012a"
	I0110 02:48:58.905525  220152 cri.go:96] found id: ""
	I0110 02:48:58.905571  220152 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:48:58.921439  220152 out.go:203] 
	W0110 02:48:58.924547  220152 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:48:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:48:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 02:48:58.924627  220152 out.go:285] * 
	* 
	W0110 02:48:58.927405  220152 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 02:48:58.930618  220152 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-676905 --alsologtostderr -v=1 failed: exit status 80
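The pause failure above comes from the container-listing step: sudo runc list -f json exits with status 1 because /run/runc does not exist on the node (see the GUEST_PAUSE stderr). A minimal manual check against this profile, assuming the same binary path and profile name used in this run, is to re-issue the same probes over SSH:

	out/minikube-linux-arm64 -p no-preload-676905 ssh -- "sudo ls -ld /run/runc"
	out/minikube-linux-arm64 -p no-preload-676905 ssh -- "sudo runc list -f json"
	out/minikube-linux-arm64 -p no-preload-676905 ssh -- "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"

If crictl still lists the kube-system containers found earlier while /run/runc is absent, a plausible but unconfirmed explanation is that cri-o keeps its runc state under a different root directory than the default queried by the pause check; verifying that would require inspecting the node's crio/runc configuration.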
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-676905
helpers_test.go:244: (dbg) docker inspect no-preload-676905:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "edb3b90bff05a0b60beee1473618b471499355017fcb496f3f3b9d44b8906d49",
	        "Created": "2026-01-10T02:46:39.759659544Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 217584,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:47:57.540923895Z",
	            "FinishedAt": "2026-01-10T02:47:56.739492001Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/edb3b90bff05a0b60beee1473618b471499355017fcb496f3f3b9d44b8906d49/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/edb3b90bff05a0b60beee1473618b471499355017fcb496f3f3b9d44b8906d49/hostname",
	        "HostsPath": "/var/lib/docker/containers/edb3b90bff05a0b60beee1473618b471499355017fcb496f3f3b9d44b8906d49/hosts",
	        "LogPath": "/var/lib/docker/containers/edb3b90bff05a0b60beee1473618b471499355017fcb496f3f3b9d44b8906d49/edb3b90bff05a0b60beee1473618b471499355017fcb496f3f3b9d44b8906d49-json.log",
	        "Name": "/no-preload-676905",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-676905:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-676905",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "edb3b90bff05a0b60beee1473618b471499355017fcb496f3f3b9d44b8906d49",
	                "LowerDir": "/var/lib/docker/overlay2/fc94be4f76220c0d09795aa60f43389a06dd04e0a5955af0531cf368de6efc6b-init/diff:/var/lib/docker/overlay2/2b235871be32ebd3e55c0a490bf5b289824c6be94b4d2763f5bca3b814af3fd1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc94be4f76220c0d09795aa60f43389a06dd04e0a5955af0531cf368de6efc6b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc94be4f76220c0d09795aa60f43389a06dd04e0a5955af0531cf368de6efc6b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc94be4f76220c0d09795aa60f43389a06dd04e0a5955af0531cf368de6efc6b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-676905",
	                "Source": "/var/lib/docker/volumes/no-preload-676905/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-676905",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-676905",
	                "name.minikube.sigs.k8s.io": "no-preload-676905",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e4620211dc620f778dba6ef6c137059b746f1b912e7f2c299a3b784b6fbeb1d7",
	            "SandboxKey": "/var/run/docker/netns/e4620211dc62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-676905": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:c9:9a:b5:30:41",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "146cb46c14407056c3f694e77394bc66aacde5ff5ac19837f4400799ed6e0ce7",
	                    "EndpointID": "7c77d219ae9db9edb9e29758faac68904f0b903f7d4bb33eb868d6e80b15c6a7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-676905",
	                        "edb3b90bff05"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-676905 -n no-preload-676905
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-676905 -n no-preload-676905: exit status 2 (471.045689ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-676905 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-676905 logs -n 25: (1.885407613s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ stop    │ -p old-k8s-version-736081 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-736081 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:42 UTC │
	│ start   │ -p old-k8s-version-736081 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:43 UTC │
	│ image   │ old-k8s-version-736081 image list --format=json                                                                                                                                                                                               │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ pause   │ -p old-k8s-version-736081 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │                     │
	│ delete  │ -p old-k8s-version-736081                                                                                                                                                                                                                     │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ delete  │ -p old-k8s-version-736081                                                                                                                                                                                                                     │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ start   │ -p embed-certs-290628 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ addons  │ enable metrics-server -p embed-certs-290628 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │                     │
	│ stop    │ -p embed-certs-290628 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │ 10 Jan 26 02:45 UTC │
	│ addons  │ enable dashboard -p embed-certs-290628 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │ 10 Jan 26 02:45 UTC │
	│ start   │ -p embed-certs-290628 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │ 10 Jan 26 02:46 UTC │
	│ image   │ embed-certs-290628 image list --format=json                                                                                                                                                                                                   │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ pause   │ -p embed-certs-290628 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │                     │
	│ delete  │ -p embed-certs-290628                                                                                                                                                                                                                         │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ delete  │ -p embed-certs-290628                                                                                                                                                                                                                         │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ delete  │ -p disable-driver-mounts-990753                                                                                                                                                                                                               │ disable-driver-mounts-990753 │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ start   │ -p no-preload-676905 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:47 UTC │
	│ addons  │ enable metrics-server -p no-preload-676905 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │                     │
	│ stop    │ -p no-preload-676905 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │ 10 Jan 26 02:47 UTC │
	│ addons  │ enable dashboard -p no-preload-676905 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │ 10 Jan 26 02:47 UTC │
	│ start   │ -p no-preload-676905 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │ 10 Jan 26 02:48 UTC │
	│ image   │ no-preload-676905 image list --format=json                                                                                                                                                                                                    │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │ 10 Jan 26 02:48 UTC │
	│ pause   │ -p no-preload-676905 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │                     │
	│ ssh     │ force-systemd-flag-038359 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-038359    │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │ 10 Jan 26 02:48 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:47:57
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:47:57.263316  217448 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:47:57.263510  217448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:47:57.263536  217448 out.go:374] Setting ErrFile to fd 2...
	I0110 02:47:57.263557  217448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:47:57.263981  217448 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:47:57.264472  217448 out.go:368] Setting JSON to false
	I0110 02:47:57.265502  217448 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5427,"bootTime":1768007851,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 02:47:57.265567  217448 start.go:143] virtualization:  
	I0110 02:47:57.268705  217448 out.go:179] * [no-preload-676905] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:47:57.272500  217448 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:47:57.272569  217448 notify.go:221] Checking for updates...
	I0110 02:47:57.278293  217448 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:47:57.281168  217448 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:47:57.284043  217448 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 02:47:57.287000  217448 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:47:57.289882  217448 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:47:57.293141  217448 config.go:182] Loaded profile config "no-preload-676905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:47:57.293780  217448 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:47:57.323990  217448 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:47:57.324118  217448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:47:57.375338  217448 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:47:57.3658075 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:47:57.375435  217448 docker.go:319] overlay module found
	I0110 02:47:57.378493  217448 out.go:179] * Using the docker driver based on existing profile
	I0110 02:47:57.381261  217448 start.go:309] selected driver: docker
	I0110 02:47:57.381281  217448 start.go:928] validating driver "docker" against &{Name:no-preload-676905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-676905 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:47:57.381369  217448 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:47:57.382058  217448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:47:57.455211  217448 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:47:57.443136408 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:47:57.455533  217448 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:47:57.455571  217448 cni.go:84] Creating CNI manager for ""
	I0110 02:47:57.455630  217448 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:47:57.455669  217448 start.go:353] cluster config:
	{Name:no-preload-676905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-676905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:47:57.459035  217448 out.go:179] * Starting "no-preload-676905" primary control-plane node in "no-preload-676905" cluster
	I0110 02:47:57.461914  217448 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:47:57.464941  217448 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:47:57.467765  217448 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:47:57.467915  217448 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/config.json ...
	I0110 02:47:57.468285  217448 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:47:57.468247  217448 cache.go:107] acquiring lock: {Name:mkdf2b70dc3bfb0100a8d957c112ff6d60b533f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:47:57.468554  217448 cache.go:115] /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0110 02:47:57.468573  217448 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 351.671µs
	I0110 02:47:57.468587  217448 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0110 02:47:57.468604  217448 cache.go:107] acquiring lock: {Name:mk335c7d6e6cec745da4e01893ab73b038bcc37b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:47:57.468641  217448 cache.go:115] /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I0110 02:47:57.468651  217448 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0" took 50.009µs
	I0110 02:47:57.468657  217448 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I0110 02:47:57.468667  217448 cache.go:107] acquiring lock: {Name:mked65ab4ffae9cf085f87a9b484648d81831c67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:47:57.468697  217448 cache.go:115] /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I0110 02:47:57.468707  217448 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0" took 41.164µs
	I0110 02:47:57.468713  217448 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I0110 02:47:57.468722  217448 cache.go:107] acquiring lock: {Name:mkd95889d95a369bd71dc1a2761730b686349d74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:47:57.468752  217448 cache.go:115] /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I0110 02:47:57.468761  217448 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0" took 39.884µs
	I0110 02:47:57.468767  217448 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I0110 02:47:57.468776  217448 cache.go:107] acquiring lock: {Name:mk308c14dc1f570c027c3dfa4b755b4007e7f2d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:47:57.468806  217448 cache.go:115] /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I0110 02:47:57.468811  217448 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0" took 36.028µs
	I0110 02:47:57.468816  217448 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I0110 02:47:57.468827  217448 cache.go:107] acquiring lock: {Name:mk8489c7600ecf98e77b2d0fd473a4d98a759726 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:47:57.468860  217448 cache.go:115] /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 exists
	I0110 02:47:57.468869  217448 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0" took 43.585µs
	I0110 02:47:57.468875  217448 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I0110 02:47:57.468884  217448 cache.go:107] acquiring lock: {Name:mk712a03fba9f53486bb85d78a3ef35c15cedfe2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:47:57.468915  217448 cache.go:115] /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I0110 02:47:57.468924  217448 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 40.82µs
	I0110 02:47:57.468930  217448 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I0110 02:47:57.468959  217448 cache.go:107] acquiring lock: {Name:mk321022d40fb1eff3edb501792389e1ccf9fc85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:47:57.468991  217448 cache.go:115] /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I0110 02:47:57.469000  217448 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1" took 42.625µs
	I0110 02:47:57.469006  217448 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I0110 02:47:57.469012  217448 cache.go:87] Successfully saved all images to host disk.
	I0110 02:47:57.487702  217448 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:47:57.487719  217448 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:47:57.487733  217448 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:47:57.487758  217448 start.go:360] acquireMachinesLock for no-preload-676905: {Name:mk2632012d0afb769f32ccada6003bc8dbc8f0e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:47:57.487895  217448 start.go:364] duration metric: took 122.114µs to acquireMachinesLock for "no-preload-676905"
	I0110 02:47:57.487918  217448 start.go:96] Skipping create...Using existing machine configuration
	I0110 02:47:57.487924  217448 fix.go:54] fixHost starting: 
	I0110 02:47:57.488181  217448 cli_runner.go:164] Run: docker container inspect no-preload-676905 --format={{.State.Status}}
	I0110 02:47:57.505038  217448 fix.go:112] recreateIfNeeded on no-preload-676905: state=Stopped err=<nil>
	W0110 02:47:57.505070  217448 fix.go:138] unexpected machine state, will restart: <nil>
	I0110 02:47:57.508413  217448 out.go:252] * Restarting existing docker container for "no-preload-676905" ...
	I0110 02:47:57.508512  217448 cli_runner.go:164] Run: docker start no-preload-676905
	I0110 02:47:57.760354  217448 cli_runner.go:164] Run: docker container inspect no-preload-676905 --format={{.State.Status}}
	I0110 02:47:57.783165  217448 kic.go:430] container "no-preload-676905" state is running.
	I0110 02:47:57.783555  217448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-676905
	I0110 02:47:57.809744  217448 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/config.json ...
	I0110 02:47:57.809957  217448 machine.go:94] provisionDockerMachine start ...
	I0110 02:47:57.810016  217448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:47:57.833039  217448 main.go:144] libmachine: Using SSH client type: native
	I0110 02:47:57.833357  217448 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I0110 02:47:57.833367  217448 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:47:57.834178  217448 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 02:48:00.983348  217448 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-676905
	
	I0110 02:48:00.983373  217448 ubuntu.go:182] provisioning hostname "no-preload-676905"
	I0110 02:48:00.983446  217448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:48:01.001079  217448 main.go:144] libmachine: Using SSH client type: native
	I0110 02:48:01.001404  217448 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I0110 02:48:01.001425  217448 main.go:144] libmachine: About to run SSH command:
	sudo hostname no-preload-676905 && echo "no-preload-676905" | sudo tee /etc/hostname
	I0110 02:48:01.159668  217448 main.go:144] libmachine: SSH cmd err, output: <nil>: no-preload-676905
	
	I0110 02:48:01.159755  217448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:48:01.186779  217448 main.go:144] libmachine: Using SSH client type: native
	I0110 02:48:01.187137  217448 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I0110 02:48:01.187154  217448 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-676905' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-676905/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-676905' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:48:01.349274  217448 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:48:01.349326  217448 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2353/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2353/.minikube}
	I0110 02:48:01.349363  217448 ubuntu.go:190] setting up certificates
	I0110 02:48:01.349382  217448 provision.go:84] configureAuth start
	I0110 02:48:01.349522  217448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-676905
	I0110 02:48:01.368613  217448 provision.go:143] copyHostCerts
	I0110 02:48:01.368692  217448 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem, removing ...
	I0110 02:48:01.368714  217448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem
	I0110 02:48:01.368800  217448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem (1082 bytes)
	I0110 02:48:01.368950  217448 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem, removing ...
	I0110 02:48:01.368963  217448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem
	I0110 02:48:01.368993  217448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem (1123 bytes)
	I0110 02:48:01.369064  217448 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem, removing ...
	I0110 02:48:01.369074  217448 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem
	I0110 02:48:01.369099  217448 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem (1679 bytes)
	I0110 02:48:01.369165  217448 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem org=jenkins.no-preload-676905 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-676905]
	I0110 02:48:01.486131  217448 provision.go:177] copyRemoteCerts
	I0110 02:48:01.486231  217448 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:48:01.486290  217448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:48:01.504208  217448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa Username:docker}
	I0110 02:48:01.613416  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:48:01.631479  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 02:48:01.648624  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:48:01.666546  217448 provision.go:87] duration metric: took 317.139953ms to configureAuth
	I0110 02:48:01.666572  217448 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:48:01.666779  217448 config.go:182] Loaded profile config "no-preload-676905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:48:01.666884  217448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:48:01.684254  217448 main.go:144] libmachine: Using SSH client type: native
	I0110 02:48:01.684584  217448 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I0110 02:48:01.684604  217448 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:48:02.053914  217448 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:48:02.053959  217448 machine.go:97] duration metric: took 4.243991003s to provisionDockerMachine
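The block above writes the service CIDR (10.96.0.0/12) into /etc/sysconfig/crio.minikube as an insecure-registry option and restarts CRI-O. To confirm by hand that the setting landed, something like the following works (a sketch; it assumes the crio unit sources that environment file, which is what the restart above implies):

	cat /etc/sysconfig/crio.minikube   # should contain CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl cat crio                 # shows the unit plus any drop-ins referencing the file
	systemctl is-active crio           # "active" once the restart has completed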
	I0110 02:48:02.053971  217448 start.go:293] postStartSetup for "no-preload-676905" (driver="docker")
	I0110 02:48:02.053983  217448 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:48:02.054067  217448 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:48:02.054135  217448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:48:02.080871  217448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa Username:docker}
	I0110 02:48:02.183769  217448 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:48:02.187113  217448 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:48:02.187143  217448 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:48:02.187154  217448 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/addons for local assets ...
	I0110 02:48:02.187209  217448 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/files for local assets ...
	I0110 02:48:02.187292  217448 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> 41682.pem in /etc/ssl/certs
	I0110 02:48:02.187396  217448 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:48:02.195320  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:48:02.213827  217448 start.go:296] duration metric: took 159.841212ms for postStartSetup
	I0110 02:48:02.213923  217448 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:48:02.213964  217448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:48:02.232758  217448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa Username:docker}
	I0110 02:48:02.332891  217448 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:48:02.337416  217448 fix.go:56] duration metric: took 4.849486026s for fixHost
	I0110 02:48:02.337439  217448 start.go:83] releasing machines lock for "no-preload-676905", held for 4.849532695s
	I0110 02:48:02.337507  217448 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-676905
	I0110 02:48:02.354277  217448 ssh_runner.go:195] Run: cat /version.json
	I0110 02:48:02.354325  217448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:48:02.354672  217448 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:48:02.354732  217448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:48:02.375320  217448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa Username:docker}
	I0110 02:48:02.376183  217448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa Username:docker}
	I0110 02:48:02.576289  217448 ssh_runner.go:195] Run: systemctl --version
	I0110 02:48:02.582765  217448 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:48:02.618217  217448 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:48:02.622377  217448 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:48:02.622449  217448 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:48:02.630240  217448 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 02:48:02.630266  217448 start.go:496] detecting cgroup driver to use...
	I0110 02:48:02.630296  217448 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 02:48:02.630352  217448 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:48:02.645549  217448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:48:02.659531  217448 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:48:02.659590  217448 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:48:02.676330  217448 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:48:02.690833  217448 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:48:02.814271  217448 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:48:02.924511  217448 docker.go:234] disabling docker service ...
	I0110 02:48:02.924573  217448 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:48:02.939602  217448 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:48:02.952499  217448 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:48:03.065076  217448 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:48:03.175511  217448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:48:03.188649  217448 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:48:03.203160  217448 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:48:03.203299  217448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:48:03.212331  217448 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 02:48:03.212428  217448 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:48:03.221610  217448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:48:03.230580  217448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:48:03.239700  217448 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:48:03.247572  217448 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:48:03.256486  217448 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:48:03.266023  217448 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:48:03.275000  217448 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:48:03.282924  217448 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:48:03.290492  217448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:48:03.409253  217448 ssh_runner.go:195] Run: sudo systemctl restart crio
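Condensed, the CRI-O reconfiguration in the preceding lines comes down to roughly the following shell sequence (commands, paths and values are taken from the log itself; treat it as a sketch rather than minikube's exact code path, and it assumes default_sysctls already exists in 02-crio.conf):

	# point crictl at the CRI-O socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image and the cgroup driver in the drop-in config
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# allow unprivileged low ports inside pods and make sure IP forwarding is on
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	# reload units and restart the runtime
	sudo systemctl daemon-reload && sudo systemctl restart crio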
	I0110 02:48:03.598934  217448 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:48:03.599038  217448 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:48:03.603372  217448 start.go:574] Will wait 60s for crictl version
	I0110 02:48:03.603461  217448 ssh_runner.go:195] Run: which crictl
	I0110 02:48:03.607014  217448 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:48:03.631193  217448 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:48:03.631292  217448 ssh_runner.go:195] Run: crio --version
	I0110 02:48:03.660321  217448 ssh_runner.go:195] Run: crio --version
	I0110 02:48:03.694316  217448 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:48:03.697339  217448 cli_runner.go:164] Run: docker network inspect no-preload-676905 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:48:03.713520  217448 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 02:48:03.717394  217448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:48:03.726925  217448 kubeadm.go:884] updating cluster {Name:no-preload-676905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-676905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:48:03.727035  217448 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:48:03.727084  217448 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:48:03.761764  217448 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:48:03.761788  217448 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:48:03.761796  217448 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 02:48:03.761891  217448 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-676905 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:no-preload-676905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:48:03.761970  217448 ssh_runner.go:195] Run: crio config
	I0110 02:48:03.833600  217448 cni.go:84] Creating CNI manager for ""
	I0110 02:48:03.833625  217448 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:48:03.833640  217448 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:48:03.833661  217448 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-676905 NodeName:no-preload-676905 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:48:03.833780  217448 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-676905"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:48:03.833859  217448 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:48:03.841328  217448 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:48:03.841401  217448 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:48:03.849611  217448 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0110 02:48:03.862219  217448 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:48:03.874507  217448 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2234 bytes)
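The kubeadm config printed above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (2234 bytes). If you want to sanity-check a file like this by hand, newer kubeadm releases ship a validator subcommand; with the v1.35.0 binaries already on the node that would look roughly like the line below (the validate subcommand is an assumption based on upstream kubeadm, not something this log runs):

	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new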
	I0110 02:48:03.886955  217448 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:48:03.890625  217448 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:48:03.900051  217448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:48:04.014846  217448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:48:04.036228  217448 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905 for IP: 192.168.76.2
	I0110 02:48:04.036300  217448 certs.go:195] generating shared ca certs ...
	I0110 02:48:04.036329  217448 certs.go:227] acquiring lock for ca certs: {Name:mk60bb8578732e3e8efcf11c7d77fb48585828c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:48:04.036517  217448 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key
	I0110 02:48:04.036595  217448 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key
	I0110 02:48:04.036634  217448 certs.go:257] generating profile certs ...
	I0110 02:48:04.036770  217448 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/client.key
	I0110 02:48:04.036900  217448 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/apiserver.key.9031fc60
	I0110 02:48:04.036996  217448 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/proxy-client.key
	I0110 02:48:04.037158  217448 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem (1338 bytes)
	W0110 02:48:04.037216  217448 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168_empty.pem, impossibly tiny 0 bytes
	I0110 02:48:04.037242  217448 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:48:04.037302  217448 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:48:04.037367  217448 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:48:04.037420  217448 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem (1679 bytes)
	I0110 02:48:04.037525  217448 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:48:04.038173  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:48:04.055833  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:48:04.074084  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:48:04.092187  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 02:48:04.110857  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0110 02:48:04.127709  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 02:48:04.144401  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:48:04.169299  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 02:48:04.209511  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:48:04.236860  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I0110 02:48:04.264773  217448 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I0110 02:48:04.285085  217448 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:48:04.298451  217448 ssh_runner.go:195] Run: openssl version
	I0110 02:48:04.305608  217448 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I0110 02:48:04.313349  217448 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I0110 02:48:04.321024  217448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I0110 02:48:04.324798  217448 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:58 /usr/share/ca-certificates/41682.pem
	I0110 02:48:04.324912  217448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I0110 02:48:04.366756  217448 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:48:04.374107  217448 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:48:04.381237  217448 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:48:04.389206  217448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:48:04.392683  217448 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:48:04.392743  217448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:48:04.438525  217448 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:48:04.445758  217448 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I0110 02:48:04.452764  217448 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I0110 02:48:04.460062  217448 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I0110 02:48:04.466463  217448 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:58 /usr/share/ca-certificates/4168.pem
	I0110 02:48:04.466534  217448 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I0110 02:48:04.508659  217448 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:48:04.515844  217448 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:48:04.519416  217448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 02:48:04.560244  217448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 02:48:04.601329  217448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 02:48:04.642136  217448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 02:48:04.683715  217448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 02:48:04.728721  217448 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
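Each of the openssl runs above uses -checkend to verify that the certificate will still be valid 24 hours (86400 seconds) from now; the answer is carried in the exit status. A minimal standalone version of the same check (the path is just one example taken from the log):

	CERT=/var/lib/minikube/certs/apiserver-kubelet-client.crt
	if sudo openssl x509 -noout -in "$CERT" -checkend 86400; then
	    echo "certificate valid for at least another 24h"
	else
	    echo "certificate expires within 24h (or could not be read)"
	fi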
	I0110 02:48:04.775453  217448 kubeadm.go:401] StartCluster: {Name:no-preload-676905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-676905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:48:04.775603  217448 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:48:04.775715  217448 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:48:04.843372  217448 cri.go:96] found id: ""
	I0110 02:48:04.843486  217448 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:48:04.852517  217448 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 02:48:04.852586  217448 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 02:48:04.852669  217448 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 02:48:04.865524  217448 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 02:48:04.866016  217448 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-676905" does not appear in /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:48:04.866186  217448 kubeconfig.go:62] /home/jenkins/minikube-integration/22414-2353/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-676905" cluster setting kubeconfig missing "no-preload-676905" context setting]
	I0110 02:48:04.866514  217448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:48:04.869237  217448 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 02:48:04.877673  217448 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0110 02:48:04.877705  217448 kubeadm.go:602] duration metric: took 25.101499ms to restartPrimaryControlPlane
	I0110 02:48:04.877715  217448 kubeadm.go:403] duration metric: took 102.273646ms to StartCluster
	I0110 02:48:04.877729  217448 settings.go:142] acquiring lock: {Name:mkf19ab3097ad600bdb33c5b47b20062ddaabac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:48:04.877803  217448 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:48:04.878531  217448 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:48:04.878996  217448 config.go:182] Loaded profile config "no-preload-676905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:48:04.879048  217448 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:48:04.879116  217448 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:48:04.879443  217448 addons.go:70] Setting storage-provisioner=true in profile "no-preload-676905"
	I0110 02:48:04.879459  217448 addons.go:239] Setting addon storage-provisioner=true in "no-preload-676905"
	W0110 02:48:04.879474  217448 addons.go:248] addon storage-provisioner should already be in state true
	I0110 02:48:04.879511  217448 host.go:66] Checking if "no-preload-676905" exists ...
	I0110 02:48:04.879758  217448 addons.go:70] Setting dashboard=true in profile "no-preload-676905"
	I0110 02:48:04.879889  217448 addons.go:239] Setting addon dashboard=true in "no-preload-676905"
	W0110 02:48:04.879925  217448 addons.go:248] addon dashboard should already be in state true
	I0110 02:48:04.879964  217448 host.go:66] Checking if "no-preload-676905" exists ...
	I0110 02:48:04.880115  217448 cli_runner.go:164] Run: docker container inspect no-preload-676905 --format={{.State.Status}}
	I0110 02:48:04.880507  217448 cli_runner.go:164] Run: docker container inspect no-preload-676905 --format={{.State.Status}}
	I0110 02:48:04.882729  217448 addons.go:70] Setting default-storageclass=true in profile "no-preload-676905"
	I0110 02:48:04.882753  217448 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-676905"
	I0110 02:48:04.883094  217448 cli_runner.go:164] Run: docker container inspect no-preload-676905 --format={{.State.Status}}
	I0110 02:48:04.885298  217448 out.go:179] * Verifying Kubernetes components...
	I0110 02:48:04.888698  217448 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:48:04.939279  217448 addons.go:239] Setting addon default-storageclass=true in "no-preload-676905"
	W0110 02:48:04.939302  217448 addons.go:248] addon default-storageclass should already be in state true
	I0110 02:48:04.939326  217448 host.go:66] Checking if "no-preload-676905" exists ...
	I0110 02:48:04.939724  217448 cli_runner.go:164] Run: docker container inspect no-preload-676905 --format={{.State.Status}}
	I0110 02:48:04.950331  217448 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 02:48:04.950402  217448 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:48:04.954642  217448 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 02:48:04.954753  217448 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:48:04.954768  217448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:48:04.954832  217448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:48:04.958452  217448 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 02:48:04.958478  217448 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 02:48:04.958553  217448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:48:05.003935  217448 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:48:05.003957  217448 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:48:05.004017  217448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-676905
	I0110 02:48:05.014631  217448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa Username:docker}
	I0110 02:48:05.030541  217448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa Username:docker}
	I0110 02:48:05.035098  217448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/no-preload-676905/id_rsa Username:docker}
	I0110 02:48:05.265198  217448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:48:05.272355  217448 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:48:05.304154  217448 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 02:48:05.304179  217448 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 02:48:05.384642  217448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:48:05.396197  217448 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 02:48:05.396224  217448 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 02:48:05.470425  217448 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 02:48:05.470447  217448 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 02:48:05.548628  217448 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 02:48:05.548651  217448 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 02:48:05.573085  217448 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 02:48:05.573108  217448 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 02:48:05.592802  217448 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 02:48:05.592826  217448 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 02:48:05.617599  217448 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 02:48:05.617623  217448 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 02:48:05.631905  217448 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 02:48:05.631930  217448 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 02:48:05.653961  217448 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:48:05.653987  217448 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 02:48:05.680573  217448 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:48:10.528668  217448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.263381253s)
	I0110 02:48:10.528777  217448 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.256402741s)
	I0110 02:48:10.528837  217448 node_ready.go:35] waiting up to 6m0s for node "no-preload-676905" to be "Ready" ...
	I0110 02:48:10.529207  217448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.144540329s)
	I0110 02:48:10.529353  217448 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.848748963s)
	I0110 02:48:10.532856  217448 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-676905 addons enable metrics-server
	
	I0110 02:48:10.547896  217448 node_ready.go:49] node "no-preload-676905" is "Ready"
	I0110 02:48:10.547963  217448 node_ready.go:38] duration metric: took 19.086177ms for node "no-preload-676905" to be "Ready" ...
	I0110 02:48:10.547992  217448 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:48:10.548079  217448 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:48:10.559312  217448 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0110 02:48:10.562176  217448 addons.go:530] duration metric: took 5.683056066s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
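With the storage-provisioner, dashboard and default-storageclass addons applied, a quick follow-up check from the host can reuse the invocation style the log itself suggests (the kubernetes-dashboard namespace is the one created by dashboard-ns.yaml above; a sketch):

	minikube -p no-preload-676905 kubectl -- get pods -n kubernetes-dashboard
	minikube -p no-preload-676905 kubectl -- get storageclass
	minikube -p no-preload-676905 kubectl -- -n kube-system get pod storage-provisioner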
	I0110 02:48:10.563420  217448 api_server.go:72] duration metric: took 5.684341415s to wait for apiserver process to appear ...
	I0110 02:48:10.563472  217448 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:48:10.563505  217448 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:48:10.572962  217448 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 02:48:10.574110  217448 api_server.go:141] control plane version: v1.35.0
	I0110 02:48:10.574166  217448 api_server.go:131] duration metric: took 10.673052ms to wait for apiserver health ...
	I0110 02:48:10.574189  217448 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:48:10.577455  217448 system_pods.go:59] 8 kube-system pods found
	I0110 02:48:10.577496  217448 system_pods.go:61] "coredns-7d764666f9-v67dz" [d212f09c-4573-4e47-ad15-50c9fdfeecd6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:48:10.577508  217448 system_pods.go:61] "etcd-no-preload-676905" [057fbf4d-96fb-423a-8d97-26392378f6a2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:48:10.577515  217448 system_pods.go:61] "kindnet-tsk2v" [d1006bf9-a95f-4260-9048-a78402602ff2] Running
	I0110 02:48:10.577523  217448 system_pods.go:61] "kube-apiserver-no-preload-676905" [9f7e4ecd-63b8-42f7-8ef7-a4ce47c14578] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:48:10.577533  217448 system_pods.go:61] "kube-controller-manager-no-preload-676905" [2baf8a7d-876b-4afe-a381-56e08abc7cc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:48:10.577541  217448 system_pods.go:61] "kube-proxy-r74hc" [0b78d574-77fa-4ec1-b986-d412d22f6a13] Running
	I0110 02:48:10.577548  217448 system_pods.go:61] "kube-scheduler-no-preload-676905" [71e28fec-7176-4dd8-89e0-77b4b1637652] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:48:10.577557  217448 system_pods.go:61] "storage-provisioner" [2867de9b-b16f-4cff-b553-1fb2f52db72b] Running
	I0110 02:48:10.577563  217448 system_pods.go:74] duration metric: took 3.342844ms to wait for pod list to return data ...
	I0110 02:48:10.577571  217448 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:48:10.580372  217448 default_sa.go:45] found service account: "default"
	I0110 02:48:10.580397  217448 default_sa.go:55] duration metric: took 2.817295ms for default service account to be created ...
	I0110 02:48:10.580407  217448 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:48:10.583350  217448 system_pods.go:86] 8 kube-system pods found
	I0110 02:48:10.583387  217448 system_pods.go:89] "coredns-7d764666f9-v67dz" [d212f09c-4573-4e47-ad15-50c9fdfeecd6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:48:10.583397  217448 system_pods.go:89] "etcd-no-preload-676905" [057fbf4d-96fb-423a-8d97-26392378f6a2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:48:10.583403  217448 system_pods.go:89] "kindnet-tsk2v" [d1006bf9-a95f-4260-9048-a78402602ff2] Running
	I0110 02:48:10.583409  217448 system_pods.go:89] "kube-apiserver-no-preload-676905" [9f7e4ecd-63b8-42f7-8ef7-a4ce47c14578] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:48:10.583422  217448 system_pods.go:89] "kube-controller-manager-no-preload-676905" [2baf8a7d-876b-4afe-a381-56e08abc7cc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:48:10.583429  217448 system_pods.go:89] "kube-proxy-r74hc" [0b78d574-77fa-4ec1-b986-d412d22f6a13] Running
	I0110 02:48:10.583438  217448 system_pods.go:89] "kube-scheduler-no-preload-676905" [71e28fec-7176-4dd8-89e0-77b4b1637652] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:48:10.583443  217448 system_pods.go:89] "storage-provisioner" [2867de9b-b16f-4cff-b553-1fb2f52db72b] Running
	I0110 02:48:10.583453  217448 system_pods.go:126] duration metric: took 3.040362ms to wait for k8s-apps to be running ...
	I0110 02:48:10.583464  217448 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:48:10.583515  217448 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:48:10.598726  217448 system_svc.go:56] duration metric: took 15.254107ms WaitForService to wait for kubelet
	I0110 02:48:10.598756  217448 kubeadm.go:587] duration metric: took 5.719678497s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:48:10.598773  217448 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:48:10.602655  217448 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 02:48:10.602730  217448 node_conditions.go:123] node cpu capacity is 2
	I0110 02:48:10.602759  217448 node_conditions.go:105] duration metric: took 3.978724ms to run NodePressure ...
	I0110 02:48:10.602788  217448 start.go:242] waiting for startup goroutines ...
	I0110 02:48:10.602826  217448 start.go:247] waiting for cluster config update ...
	I0110 02:48:10.602850  217448 start.go:256] writing updated cluster config ...
	I0110 02:48:10.603171  217448 ssh_runner.go:195] Run: rm -f paused
	I0110 02:48:10.607990  217448 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:48:10.612672  217448 pod_ready.go:83] waiting for pod "coredns-7d764666f9-v67dz" in "kube-system" namespace to be "Ready" or be gone ...
	W0110 02:48:12.639701  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	W0110 02:48:15.119461  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	W0110 02:48:17.623601  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	W0110 02:48:20.119480  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	W0110 02:48:22.618312  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	W0110 02:48:25.118867  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	W0110 02:48:27.618395  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	W0110 02:48:29.618516  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	W0110 02:48:32.118171  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	W0110 02:48:34.617815  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	W0110 02:48:36.618190  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	W0110 02:48:39.117656  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	W0110 02:48:41.118337  217448 pod_ready.go:104] pod "coredns-7d764666f9-v67dz" is not "Ready", error: <nil>
	I0110 02:48:43.121193  217448 pod_ready.go:94] pod "coredns-7d764666f9-v67dz" is "Ready"
	I0110 02:48:43.121220  217448 pod_ready.go:86] duration metric: took 32.50852248s for pod "coredns-7d764666f9-v67dz" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:48:43.128158  217448 pod_ready.go:83] waiting for pod "etcd-no-preload-676905" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:48:43.132783  217448 pod_ready.go:94] pod "etcd-no-preload-676905" is "Ready"
	I0110 02:48:43.132811  217448 pod_ready.go:86] duration metric: took 4.626469ms for pod "etcd-no-preload-676905" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:48:43.135423  217448 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-676905" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:48:43.139707  217448 pod_ready.go:94] pod "kube-apiserver-no-preload-676905" is "Ready"
	I0110 02:48:43.139731  217448 pod_ready.go:86] duration metric: took 4.283644ms for pod "kube-apiserver-no-preload-676905" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:48:43.141855  217448 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-676905" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:48:43.316815  217448 pod_ready.go:94] pod "kube-controller-manager-no-preload-676905" is "Ready"
	I0110 02:48:43.316848  217448 pod_ready.go:86] duration metric: took 174.970319ms for pod "kube-controller-manager-no-preload-676905" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:48:43.517346  217448 pod_ready.go:83] waiting for pod "kube-proxy-r74hc" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:48:43.916413  217448 pod_ready.go:94] pod "kube-proxy-r74hc" is "Ready"
	I0110 02:48:43.916450  217448 pod_ready.go:86] duration metric: took 399.075477ms for pod "kube-proxy-r74hc" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:48:44.116583  217448 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-676905" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:48:44.517355  217448 pod_ready.go:94] pod "kube-scheduler-no-preload-676905" is "Ready"
	I0110 02:48:44.517382  217448 pod_ready.go:86] duration metric: took 400.773648ms for pod "kube-scheduler-no-preload-676905" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:48:44.517395  217448 pod_ready.go:40] duration metric: took 33.909369497s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:48:44.571543  217448 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 02:48:44.574929  217448 out.go:203] 
	W0110 02:48:44.577772  217448 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 02:48:44.580829  217448 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:48:44.583727  217448 out.go:179] * Done! kubectl is now configured to use "no-preload-676905" cluster and "default" namespace by default
	I0110 02:48:57.060863  190834 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001123817s
	I0110 02:48:57.065379  190834 kubeadm.go:319] 
	I0110 02:48:57.065521  190834 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I0110 02:48:57.065604  190834 kubeadm.go:319] 	- The kubelet is not running
	I0110 02:48:57.065812  190834 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0110 02:48:57.065826  190834 kubeadm.go:319] 
	I0110 02:48:57.066022  190834 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0110 02:48:57.066086  190834 kubeadm.go:319] 	- 'systemctl status kubelet'
	I0110 02:48:57.066161  190834 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I0110 02:48:57.066172  190834 kubeadm.go:319] 
	I0110 02:48:57.067236  190834 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:48:57.068005  190834 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:48:57.068206  190834 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:48:57.068622  190834 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I0110 02:48:57.068635  190834 kubeadm.go:319] 
	I0110 02:48:57.068751  190834 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I0110 02:48:57.068816  190834 kubeadm.go:403] duration metric: took 8m8.180913411s to StartCluster
	I0110 02:48:57.068867  190834 cri.go:61] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0110 02:48:57.068936  190834 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-apiserver
	I0110 02:48:57.098192  190834 cri.go:96] found id: ""
	I0110 02:48:57.098234  190834 logs.go:282] 0 containers: []
	W0110 02:48:57.098243  190834 logs.go:284] No container was found matching "kube-apiserver"
	I0110 02:48:57.098252  190834 cri.go:61] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0110 02:48:57.098315  190834 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=etcd
	I0110 02:48:57.125219  190834 cri.go:96] found id: ""
	I0110 02:48:57.125247  190834 logs.go:282] 0 containers: []
	W0110 02:48:57.125261  190834 logs.go:284] No container was found matching "etcd"
	I0110 02:48:57.125268  190834 cri.go:61] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0110 02:48:57.125342  190834 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=coredns
	I0110 02:48:57.150139  190834 cri.go:96] found id: ""
	I0110 02:48:57.150167  190834 logs.go:282] 0 containers: []
	W0110 02:48:57.150180  190834 logs.go:284] No container was found matching "coredns"
	I0110 02:48:57.150188  190834 cri.go:61] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0110 02:48:57.150254  190834 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-scheduler
	I0110 02:48:57.175259  190834 cri.go:96] found id: ""
	I0110 02:48:57.175284  190834 logs.go:282] 0 containers: []
	W0110 02:48:57.175294  190834 logs.go:284] No container was found matching "kube-scheduler"
	I0110 02:48:57.175300  190834 cri.go:61] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0110 02:48:57.175355  190834 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-proxy
	I0110 02:48:57.200932  190834 cri.go:96] found id: ""
	I0110 02:48:57.200955  190834 logs.go:282] 0 containers: []
	W0110 02:48:57.200965  190834 logs.go:284] No container was found matching "kube-proxy"
	I0110 02:48:57.200988  190834 cri.go:61] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0110 02:48:57.201068  190834 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kube-controller-manager
	I0110 02:48:57.227348  190834 cri.go:96] found id: ""
	I0110 02:48:57.227374  190834 logs.go:282] 0 containers: []
	W0110 02:48:57.227383  190834 logs.go:284] No container was found matching "kube-controller-manager"
	I0110 02:48:57.227390  190834 cri.go:61] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0110 02:48:57.227445  190834 ssh_runner.go:195] Run: sudo crictl --timeout=10s ps -a --quiet --name=kindnet
	I0110 02:48:57.253778  190834 cri.go:96] found id: ""
	I0110 02:48:57.253801  190834 logs.go:282] 0 containers: []
	W0110 02:48:57.253810  190834 logs.go:284] No container was found matching "kindnet"
	I0110 02:48:57.253847  190834 logs.go:123] Gathering logs for container status ...
	I0110 02:48:57.253865  190834 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0110 02:48:57.291511  190834 logs.go:123] Gathering logs for kubelet ...
	I0110 02:48:57.291541  190834 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0110 02:48:57.388786  190834 logs.go:123] Gathering logs for dmesg ...
	I0110 02:48:57.388823  190834 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0110 02:48:57.407987  190834 logs.go:123] Gathering logs for describe nodes ...
	I0110 02:48:57.408116  190834 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0110 02:48:57.484131  190834 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0110 02:48:57.474928    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:57.475936    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:57.477614    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:57.478171    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:57.479931    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E0110 02:48:57.474928    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:57.475936    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:57.477614    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:57.478171    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E0110 02:48:57.479931    4919 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0110 02:48:57.484205  190834 logs.go:123] Gathering logs for CRI-O ...
	I0110 02:48:57.484232  190834 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0110 02:48:57.522887  190834 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001123817s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W0110 02:48:57.522988  190834 out.go:285] * 
	W0110 02:48:57.523068  190834 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001123817s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 02:48:57.523136  190834 out.go:285] * 
	W0110 02:48:57.523415  190834 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 02:48:57.529625  190834 out.go:203] 
	W0110 02:48:57.533667  190834 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001123817s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W0110 02:48:57.533807  190834 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0110 02:48:57.533861  190834 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0110 02:48:57.537476  190834 out.go:203] 
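	The exit above suggests retrying with an explicit kubelet cgroup driver, and the preflight warnings flag cgroup v1 deprecation on this kernel. A hedged sketch of the suggested retry plus a host check; the stat probe of the cgroup filesystem type is an assumption for illustration, not something taken from this log:

	    # suggested retry from the log (keep whatever other start flags the run used)
	    minikube start --extra-config=kubelet.cgroup-driver=systemd
	    # check whether the host exposes cgroup v1 (tmpfs) or v2 (cgroup2fs)
	    stat -fc %T /sys/fs/cgroup/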
	
	
	==> CRI-O <==
	Jan 10 02:48:40 no-preload-676905 crio[663]: time="2026-01-10T02:48:40.499194541Z" level=info msg="Started container" PID=1680 containerID=92b43250d4836d6795421decd73ba4fd8ee1fd121d721eb0db2e6514bbd285c8 description=kube-system/storage-provisioner/storage-provisioner id=512b0672-5d94-4d0c-84a9-841cccd4f481 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f3ed95c04dd4e7e20d24537ee498278d578e031bb57b72cc84c234d566d67548
	Jan 10 02:48:50 no-preload-676905 crio[663]: time="2026-01-10T02:48:50.130055166Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:48:50 no-preload-676905 crio[663]: time="2026-01-10T02:48:50.130495868Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:48:50 no-preload-676905 crio[663]: time="2026-01-10T02:48:50.13487898Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:48:50 no-preload-676905 crio[663]: time="2026-01-10T02:48:50.134915812Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:48:50 no-preload-676905 crio[663]: time="2026-01-10T02:48:50.139336773Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:48:50 no-preload-676905 crio[663]: time="2026-01-10T02:48:50.139371685Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:48:50 no-preload-676905 crio[663]: time="2026-01-10T02:48:50.143556689Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:48:50 no-preload-676905 crio[663]: time="2026-01-10T02:48:50.143592635Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:48:50 no-preload-676905 crio[663]: time="2026-01-10T02:48:50.143623215Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 10 02:48:50 no-preload-676905 crio[663]: time="2026-01-10T02:48:50.147739954Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:48:50 no-preload-676905 crio[663]: time="2026-01-10T02:48:50.147775605Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:48:56 no-preload-676905 crio[663]: time="2026-01-10T02:48:56.298796683Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=352878fd-2069-40b8-a128-abfc871c263b name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:48:56 no-preload-676905 crio[663]: time="2026-01-10T02:48:56.302010521Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=354a7cae-28d8-4fe8-88a7-3c568a0de01c name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:48:56 no-preload-676905 crio[663]: time="2026-01-10T02:48:56.305272449Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk/dashboard-metrics-scraper" id=775dedad-3f70-4d32-bcd5-bcfe4b10f9f3 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:48:56 no-preload-676905 crio[663]: time="2026-01-10T02:48:56.30537972Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:48:56 no-preload-676905 crio[663]: time="2026-01-10T02:48:56.315553209Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:48:56 no-preload-676905 crio[663]: time="2026-01-10T02:48:56.316262491Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:48:56 no-preload-676905 crio[663]: time="2026-01-10T02:48:56.335909213Z" level=info msg="Created container 9ca2cd8a5c9b3072491c22f97f3f70d330eac60ee6c190558e871b3e7c11e63b: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk/dashboard-metrics-scraper" id=775dedad-3f70-4d32-bcd5-bcfe4b10f9f3 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:48:56 no-preload-676905 crio[663]: time="2026-01-10T02:48:56.337081374Z" level=info msg="Starting container: 9ca2cd8a5c9b3072491c22f97f3f70d330eac60ee6c190558e871b3e7c11e63b" id=3c79b305-e1cc-4869-86b6-62ca52052a04 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:48:56 no-preload-676905 crio[663]: time="2026-01-10T02:48:56.340535125Z" level=info msg="Started container" PID=1770 containerID=9ca2cd8a5c9b3072491c22f97f3f70d330eac60ee6c190558e871b3e7c11e63b description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk/dashboard-metrics-scraper id=3c79b305-e1cc-4869-86b6-62ca52052a04 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f99913708474a6ab4e1be10be337b13b95f278ea8cf371aa9ebf9eb289a53483
	Jan 10 02:48:56 no-preload-676905 conmon[1768]: conmon 9ca2cd8a5c9b3072491c <ninfo>: container 1770 exited with status 1
	Jan 10 02:48:56 no-preload-676905 crio[663]: time="2026-01-10T02:48:56.512617092Z" level=info msg="Removing container: 54d02af0ab43c67b86a1069282916ed762848b8c601b8efc26fcfbf18c7aa2bc" id=2ef5a79f-bf7e-41ef-9ea8-5bbdc7e3629d name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:48:56 no-preload-676905 crio[663]: time="2026-01-10T02:48:56.522927234Z" level=info msg="Error loading conmon cgroup of container 54d02af0ab43c67b86a1069282916ed762848b8c601b8efc26fcfbf18c7aa2bc: cgroup deleted" id=2ef5a79f-bf7e-41ef-9ea8-5bbdc7e3629d name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:48:56 no-preload-676905 crio[663]: time="2026-01-10T02:48:56.53238825Z" level=info msg="Removed container 54d02af0ab43c67b86a1069282916ed762848b8c601b8efc26fcfbf18c7aa2bc: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk/dashboard-metrics-scraper" id=2ef5a79f-bf7e-41ef-9ea8-5bbdc7e3629d name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	9ca2cd8a5c9b3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           4 seconds ago       Exited              dashboard-metrics-scraper   3                   f99913708474a       dashboard-metrics-scraper-867fb5f87b-kjnrk   kubernetes-dashboard
	92b43250d4836       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           20 seconds ago      Running             storage-provisioner         2                   f3ed95c04dd4e       storage-provisioner                          kube-system
	1f6b2e2b36205       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   43 seconds ago      Running             kubernetes-dashboard        0                   daf924e0d08a6       kubernetes-dashboard-b84665fb8-zvbxj         kubernetes-dashboard
	54ca813feadfc       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   563b7accf8622       busybox                                      default
	89055935644ac       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           50 seconds ago      Running             coredns                     1                   b990e7833cf1d       coredns-7d764666f9-v67dz                     kube-system
	068ee48cb5d42       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           50 seconds ago      Exited              storage-provisioner         1                   f3ed95c04dd4e       storage-provisioner                          kube-system
	27193b01d4873       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           50 seconds ago      Running             kube-proxy                  1                   f92502d421a6d       kube-proxy-r74hc                             kube-system
	93572ed1012d6       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           50 seconds ago      Running             kindnet-cni                 1                   18f7cd273abc8       kindnet-tsk2v                                kube-system
	6896022c3f268       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           55 seconds ago      Running             kube-apiserver              1                   af400886bf095       kube-apiserver-no-preload-676905             kube-system
	a7ff1e6390fa7       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           55 seconds ago      Running             etcd                        1                   2c7d639f71524       etcd-no-preload-676905                       kube-system
	d553ba4a25650       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           55 seconds ago      Running             kube-scheduler              1                   3d0a5185c4b70       kube-scheduler-no-preload-676905             kube-system
	e373472937baf       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           55 seconds ago      Running             kube-controller-manager     1                   9de6744774c99       kube-controller-manager-no-preload-676905    kube-system
	
	
	==> coredns [89055935644acadcf02b28ab8d6db16656e01c56a0338f89d963ab2396bdcd1c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:58354 - 36554 "HINFO IN 3253843191460191545.4344913309749210345. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012726782s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-676905
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-676905
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=no-preload-676905
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_47_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:47:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-676905
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:48:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:48:50 +0000   Sat, 10 Jan 2026 02:47:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:48:50 +0000   Sat, 10 Jan 2026 02:47:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:48:50 +0000   Sat, 10 Jan 2026 02:47:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:48:50 +0000   Sat, 10 Jan 2026 02:47:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-676905
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                f6842875-d9d6-4f29-b119-b957541c22e9
	  Boot ID:                    41f52236-76b9-4dc8-bfe3-121fbdeb9659
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-7d764666f9-v67dz                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     105s
	  kube-system                 etcd-no-preload-676905                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         110s
	  kube-system                 kindnet-tsk2v                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-no-preload-676905              250m (12%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-no-preload-676905     200m (10%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-r74hc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-no-preload-676905              100m (5%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-kjnrk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-zvbxj          0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  106s  node-controller  Node no-preload-676905 event: Registered Node no-preload-676905 in Controller
	  Normal  RegisteredNode  48s   node-controller  Node no-preload-676905 event: Registered Node no-preload-676905 in Controller
	
	
	==> dmesg <==
	[ +27.765975] overlayfs: idmapped layers are currently not supported
	[Jan10 02:15] overlayfs: idmapped layers are currently not supported
	[Jan10 02:16] overlayfs: idmapped layers are currently not supported
	[Jan10 02:17] overlayfs: idmapped layers are currently not supported
	[Jan10 02:18] overlayfs: idmapped layers are currently not supported
	[ +23.212409] overlayfs: idmapped layers are currently not supported
	[  +9.866169] overlayfs: idmapped layers are currently not supported
	[Jan10 02:19] overlayfs: idmapped layers are currently not supported
	[Jan10 02:20] overlayfs: idmapped layers are currently not supported
	[ +35.960240] overlayfs: idmapped layers are currently not supported
	[Jan10 02:21] overlayfs: idmapped layers are currently not supported
	[Jan10 02:23] overlayfs: idmapped layers are currently not supported
	[Jan10 02:24] overlayfs: idmapped layers are currently not supported
	[Jan10 02:25] overlayfs: idmapped layers are currently not supported
	[Jan10 02:30] overlayfs: idmapped layers are currently not supported
	[Jan10 02:31] overlayfs: idmapped layers are currently not supported
	[Jan10 02:35] overlayfs: idmapped layers are currently not supported
	[Jan10 02:37] overlayfs: idmapped layers are currently not supported
	[Jan10 02:41] overlayfs: idmapped layers are currently not supported
	[ +37.534021] overlayfs: idmapped layers are currently not supported
	[Jan10 02:43] overlayfs: idmapped layers are currently not supported
	[Jan10 02:44] overlayfs: idmapped layers are currently not supported
	[Jan10 02:45] overlayfs: idmapped layers are currently not supported
	[Jan10 02:46] overlayfs: idmapped layers are currently not supported
	[Jan10 02:48] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a7ff1e6390fa775c123c587323c7e2f6cec3b4b2d846529ec2347776ff26d6f4] <==
	{"level":"info","ts":"2026-01-10T02:48:05.598494Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T02:48:05.598630Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T02:48:05.599446Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2026-01-10T02:48:05.604550Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T02:48:05.604588Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T02:48:05.604025Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:48:05.604626Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:48:06.331855Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T02:48:06.331982Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:48:06.332061Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T02:48:06.332108Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:48:06.332154Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T02:48:06.339842Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T02:48:06.339948Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:48:06.340003Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T02:48:06.340038Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T02:48:06.348013Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-676905 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:48:06.348123Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:48:06.348184Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:48:06.349315Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:48:06.353737Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T02:48:06.379832Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:48:06.379946Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:48:06.381954Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:48:06.395862Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 02:49:00 up  1:31,  0 user,  load average: 2.93, 2.30, 1.99
	Linux no-preload-676905 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [93572ed1012d65280df6c8e67b6f1922bebba09495f4249d9770c01186da9814] <==
	I0110 02:48:09.831710       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:48:09.832062       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 02:48:09.832200       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:48:09.832212       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:48:09.832221       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:48:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:48:10.125723       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:48:10.125744       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:48:10.125753       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:48:10.126070       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0110 02:48:40.224173       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0110 02:48:40.224173       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0110 02:48:40.224305       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0110 02:48:40.224403       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I0110 02:48:41.726452       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:48:41.726483       1 metrics.go:72] Registering metrics
	I0110 02:48:41.726534       1 controller.go:711] "Syncing nftables rules"
	I0110 02:48:50.125227       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:48:50.125267       1 main.go:301] handling current node
	I0110 02:49:00.141909       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:49:00.141948       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6896022c3f268165644bf92599f94c980310297bd558b3038a54e557be2f3059] <==
	I0110 02:48:09.118166       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:09.118241       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0110 02:48:09.118577       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 02:48:09.140544       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 02:48:09.149752       1 cache.go:39] Caches are synced for autoregister controller
	I0110 02:48:09.156963       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 02:48:09.171961       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:09.171983       1 policy_source.go:248] refreshing policies
	I0110 02:48:09.172107       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0110 02:48:09.172116       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0110 02:48:09.186841       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 02:48:09.225499       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E0110 02:48:09.250085       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 02:48:09.290263       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:48:09.649908       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:48:10.173545       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:48:10.221738       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:48:10.249275       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:48:10.258173       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:48:10.334625       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.24.11"}
	I0110 02:48:10.386321       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.65.211"}
	I0110 02:48:12.494031       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:48:12.494146       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:48:12.670750       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 02:48:12.797494       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [e373472937baf1f075e16e1423aa5e5fbcd79cb428ae5d27b8fc181cd4d93dae] <==
	I0110 02:48:12.103229       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.103232       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.103241       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.103241       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.103252       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.103336       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.106171       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:48:12.113694       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.103713       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.101556       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.103922       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.101581       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.104003       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.104016       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.103531       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.103620       1 range_allocator.go:177] "Sending events to api server"
	I0110 02:48:12.138578       1 range_allocator.go:181] "Starting range CIDR allocator"
	I0110 02:48:12.138627       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:48:12.138684       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.203288       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.203377       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:48:12.203394       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:48:12.206327       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.806207       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I0110 02:48:12.806412       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [27193b01d487358eda6303830a14176f18fda2cc8ccb2d5478cc50ca9111e0fa] <==
	I0110 02:48:10.038115       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:48:10.200826       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:48:10.301657       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:10.301689       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 02:48:10.301770       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:48:10.401323       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:48:10.401380       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:48:10.426645       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:48:10.428276       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:48:10.428305       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:48:10.456650       1 config.go:200] "Starting service config controller"
	I0110 02:48:10.456667       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:48:10.456702       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:48:10.456706       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:48:10.456737       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:48:10.456742       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:48:10.463112       1 config.go:309] "Starting node config controller"
	I0110 02:48:10.463129       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:48:10.463136       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:48:10.557294       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:48:10.557462       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 02:48:10.557479       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d553ba4a2565012123c50e31a0791c4773b9cf73ab9ea054e4d6cac50e536f76] <==
	I0110 02:48:07.648849       1 serving.go:386] Generated self-signed cert in-memory
	W0110 02:48:08.754513       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 02:48:08.754539       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 02:48:08.754549       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 02:48:08.754564       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 02:48:08.939218       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 02:48:08.939248       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:48:08.950742       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 02:48:08.950772       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:48:08.964013       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 02:48:08.964126       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 02:48:09.151877       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:48:24 no-preload-676905 kubelet[783]: I0110 02:48:24.417974     783 scope.go:122] "RemoveContainer" containerID="86dc8adae2d4753385f851832f0ed29e1d47b0fed43e6b5c393c7d7115cb58a5"
	Jan 10 02:48:24 no-preload-676905 kubelet[783]: E0110 02:48:24.418204     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-kjnrk_kubernetes-dashboard(a045c7cb-7d1d-4de3-bd66-a14b6565b52d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk" podUID="a045c7cb-7d1d-4de3-bd66-a14b6565b52d"
	Jan 10 02:48:29 no-preload-676905 kubelet[783]: E0110 02:48:29.644428     783 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk" containerName="dashboard-metrics-scraper"
	Jan 10 02:48:29 no-preload-676905 kubelet[783]: I0110 02:48:29.644468     783 scope.go:122] "RemoveContainer" containerID="86dc8adae2d4753385f851832f0ed29e1d47b0fed43e6b5c393c7d7115cb58a5"
	Jan 10 02:48:29 no-preload-676905 kubelet[783]: E0110 02:48:29.644628     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-kjnrk_kubernetes-dashboard(a045c7cb-7d1d-4de3-bd66-a14b6565b52d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk" podUID="a045c7cb-7d1d-4de3-bd66-a14b6565b52d"
	Jan 10 02:48:34 no-preload-676905 kubelet[783]: E0110 02:48:34.297999     783 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk" containerName="dashboard-metrics-scraper"
	Jan 10 02:48:34 no-preload-676905 kubelet[783]: I0110 02:48:34.298495     783 scope.go:122] "RemoveContainer" containerID="86dc8adae2d4753385f851832f0ed29e1d47b0fed43e6b5c393c7d7115cb58a5"
	Jan 10 02:48:34 no-preload-676905 kubelet[783]: E0110 02:48:34.441883     783 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk" containerName="dashboard-metrics-scraper"
	Jan 10 02:48:34 no-preload-676905 kubelet[783]: I0110 02:48:34.441694     783 scope.go:122] "RemoveContainer" containerID="86dc8adae2d4753385f851832f0ed29e1d47b0fed43e6b5c393c7d7115cb58a5"
	Jan 10 02:48:34 no-preload-676905 kubelet[783]: I0110 02:48:34.443761     783 scope.go:122] "RemoveContainer" containerID="54d02af0ab43c67b86a1069282916ed762848b8c601b8efc26fcfbf18c7aa2bc"
	Jan 10 02:48:34 no-preload-676905 kubelet[783]: E0110 02:48:34.444038     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-kjnrk_kubernetes-dashboard(a045c7cb-7d1d-4de3-bd66-a14b6565b52d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk" podUID="a045c7cb-7d1d-4de3-bd66-a14b6565b52d"
	Jan 10 02:48:39 no-preload-676905 kubelet[783]: E0110 02:48:39.644199     783 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk" containerName="dashboard-metrics-scraper"
	Jan 10 02:48:39 no-preload-676905 kubelet[783]: I0110 02:48:39.644263     783 scope.go:122] "RemoveContainer" containerID="54d02af0ab43c67b86a1069282916ed762848b8c601b8efc26fcfbf18c7aa2bc"
	Jan 10 02:48:39 no-preload-676905 kubelet[783]: E0110 02:48:39.644469     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-kjnrk_kubernetes-dashboard(a045c7cb-7d1d-4de3-bd66-a14b6565b52d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk" podUID="a045c7cb-7d1d-4de3-bd66-a14b6565b52d"
	Jan 10 02:48:40 no-preload-676905 kubelet[783]: I0110 02:48:40.457502     783 scope.go:122] "RemoveContainer" containerID="068ee48cb5d42288b992f6422f0f6edf35a1579daa69d7eb3ffb9e08f9182b2a"
	Jan 10 02:48:42 no-preload-676905 kubelet[783]: E0110 02:48:42.662375     783 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-v67dz" containerName="coredns"
	Jan 10 02:48:56 no-preload-676905 kubelet[783]: E0110 02:48:56.298264     783 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk" containerName="dashboard-metrics-scraper"
	Jan 10 02:48:56 no-preload-676905 kubelet[783]: I0110 02:48:56.298310     783 scope.go:122] "RemoveContainer" containerID="54d02af0ab43c67b86a1069282916ed762848b8c601b8efc26fcfbf18c7aa2bc"
	Jan 10 02:48:56 no-preload-676905 kubelet[783]: I0110 02:48:56.497605     783 scope.go:122] "RemoveContainer" containerID="54d02af0ab43c67b86a1069282916ed762848b8c601b8efc26fcfbf18c7aa2bc"
	Jan 10 02:48:56 no-preload-676905 kubelet[783]: E0110 02:48:56.498231     783 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk" containerName="dashboard-metrics-scraper"
	Jan 10 02:48:56 no-preload-676905 kubelet[783]: I0110 02:48:56.498257     783 scope.go:122] "RemoveContainer" containerID="9ca2cd8a5c9b3072491c22f97f3f70d330eac60ee6c190558e871b3e7c11e63b"
	Jan 10 02:48:56 no-preload-676905 kubelet[783]: E0110 02:48:56.498777     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-kjnrk_kubernetes-dashboard(a045c7cb-7d1d-4de3-bd66-a14b6565b52d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk" podUID="a045c7cb-7d1d-4de3-bd66-a14b6565b52d"
	Jan 10 02:48:56 no-preload-676905 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 02:48:56 no-preload-676905 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 02:48:56 no-preload-676905 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [1f6b2e2b36205985d05a8d8ccaa07bf33db63d212ad795c058c0b9fffacd012a] <==
	2026/01/10 02:48:17 Starting overwatch
	2026/01/10 02:48:17 Using namespace: kubernetes-dashboard
	2026/01/10 02:48:17 Using in-cluster config to connect to apiserver
	2026/01/10 02:48:17 Using secret token for csrf signing
	2026/01/10 02:48:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 02:48:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 02:48:17 Successful initial request to the apiserver, version: v1.35.0
	2026/01/10 02:48:17 Generating JWE encryption key
	2026/01/10 02:48:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 02:48:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 02:48:18 Initializing JWE encryption key from synchronized object
	2026/01/10 02:48:18 Creating in-cluster Sidecar client
	2026/01/10 02:48:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:48:18 Serving insecurely on HTTP port: 9090
	2026/01/10 02:48:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [068ee48cb5d42288b992f6422f0f6edf35a1579daa69d7eb3ffb9e08f9182b2a] <==
	I0110 02:48:09.982348       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 02:48:39.985512       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [92b43250d4836d6795421decd73ba4fd8ee1fd121d721eb0db2e6514bbd285c8] <==
	I0110 02:48:40.515001       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 02:48:40.530338       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 02:48:40.530505       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 02:48:40.532806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:48:43.988135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:48:48.248546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:48:51.846753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:48:54.900652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:48:57.923264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:48:57.927770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:48:57.928271       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 02:48:57.928470       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-676905_e1aa6446-9bbb-466d-8423-2bfa866cdf9a!
	I0110 02:48:57.929426       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d3b5d431-8073-4c52-ab3f-9b40b241d7ee", APIVersion:"v1", ResourceVersion:"659", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-676905_e1aa6446-9bbb-466d-8423-2bfa866cdf9a became leader
	W0110 02:48:57.937364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:48:57.961405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:48:58.029142       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-676905_e1aa6446-9bbb-466d-8423-2bfa866cdf9a!
	W0110 02:48:59.967456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:48:59.973677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-676905 -n no-preload-676905
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-676905 -n no-preload-676905: exit status 2 (449.571399ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-676905 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-676905
helpers_test.go:244: (dbg) docker inspect no-preload-676905:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "edb3b90bff05a0b60beee1473618b471499355017fcb496f3f3b9d44b8906d49",
	        "Created": "2026-01-10T02:46:39.759659544Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 217584,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:47:57.540923895Z",
	            "FinishedAt": "2026-01-10T02:47:56.739492001Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/edb3b90bff05a0b60beee1473618b471499355017fcb496f3f3b9d44b8906d49/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/edb3b90bff05a0b60beee1473618b471499355017fcb496f3f3b9d44b8906d49/hostname",
	        "HostsPath": "/var/lib/docker/containers/edb3b90bff05a0b60beee1473618b471499355017fcb496f3f3b9d44b8906d49/hosts",
	        "LogPath": "/var/lib/docker/containers/edb3b90bff05a0b60beee1473618b471499355017fcb496f3f3b9d44b8906d49/edb3b90bff05a0b60beee1473618b471499355017fcb496f3f3b9d44b8906d49-json.log",
	        "Name": "/no-preload-676905",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-676905:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-676905",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "edb3b90bff05a0b60beee1473618b471499355017fcb496f3f3b9d44b8906d49",
	                "LowerDir": "/var/lib/docker/overlay2/fc94be4f76220c0d09795aa60f43389a06dd04e0a5955af0531cf368de6efc6b-init/diff:/var/lib/docker/overlay2/2b235871be32ebd3e55c0a490bf5b289824c6be94b4d2763f5bca3b814af3fd1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fc94be4f76220c0d09795aa60f43389a06dd04e0a5955af0531cf368de6efc6b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fc94be4f76220c0d09795aa60f43389a06dd04e0a5955af0531cf368de6efc6b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fc94be4f76220c0d09795aa60f43389a06dd04e0a5955af0531cf368de6efc6b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-676905",
	                "Source": "/var/lib/docker/volumes/no-preload-676905/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-676905",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-676905",
	                "name.minikube.sigs.k8s.io": "no-preload-676905",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e4620211dc620f778dba6ef6c137059b746f1b912e7f2c299a3b784b6fbeb1d7",
	            "SandboxKey": "/var/run/docker/netns/e4620211dc62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-676905": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:c9:9a:b5:30:41",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "146cb46c14407056c3f694e77394bc66aacde5ff5ac19837f4400799ed6e0ce7",
	                    "EndpointID": "7c77d219ae9db9edb9e29758faac68904f0b903f7d4bb33eb868d6e80b15c6a7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-676905",
	                        "edb3b90bff05"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-676905 -n no-preload-676905
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-676905 -n no-preload-676905: exit status 2 (464.685156ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-676905 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p no-preload-676905 logs -n 25: (1.607994378s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-736081 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:42 UTC │ 10 Jan 26 02:43 UTC │
	│ image   │ old-k8s-version-736081 image list --format=json                                                                                                                                                                                               │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ pause   │ -p old-k8s-version-736081 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │                     │
	│ delete  │ -p old-k8s-version-736081                                                                                                                                                                                                                     │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ delete  │ -p old-k8s-version-736081                                                                                                                                                                                                                     │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ start   │ -p embed-certs-290628 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ addons  │ enable metrics-server -p embed-certs-290628 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │                     │
	│ stop    │ -p embed-certs-290628 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │ 10 Jan 26 02:45 UTC │
	│ addons  │ enable dashboard -p embed-certs-290628 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │ 10 Jan 26 02:45 UTC │
	│ start   │ -p embed-certs-290628 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │ 10 Jan 26 02:46 UTC │
	│ image   │ embed-certs-290628 image list --format=json                                                                                                                                                                                                   │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ pause   │ -p embed-certs-290628 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │                     │
	│ delete  │ -p embed-certs-290628                                                                                                                                                                                                                         │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ delete  │ -p embed-certs-290628                                                                                                                                                                                                                         │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ delete  │ -p disable-driver-mounts-990753                                                                                                                                                                                                               │ disable-driver-mounts-990753 │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ start   │ -p no-preload-676905 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:47 UTC │
	│ addons  │ enable metrics-server -p no-preload-676905 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │                     │
	│ stop    │ -p no-preload-676905 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │ 10 Jan 26 02:47 UTC │
	│ addons  │ enable dashboard -p no-preload-676905 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │ 10 Jan 26 02:47 UTC │
	│ start   │ -p no-preload-676905 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │ 10 Jan 26 02:48 UTC │
	│ image   │ no-preload-676905 image list --format=json                                                                                                                                                                                                    │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │ 10 Jan 26 02:48 UTC │
	│ pause   │ -p no-preload-676905 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │                     │
	│ ssh     │ force-systemd-flag-038359 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-038359    │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │ 10 Jan 26 02:48 UTC │
	│ delete  │ -p force-systemd-flag-038359                                                                                                                                                                                                                  │ force-systemd-flag-038359    │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │ 10 Jan 26 02:49 UTC │
	│ start   │ -p default-k8s-diff-port-403885 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-403885 │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:49:02
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:49:02.381678  221603 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:49:02.381870  221603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:49:02.381891  221603 out.go:374] Setting ErrFile to fd 2...
	I0110 02:49:02.381911  221603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:49:02.382189  221603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:49:02.382595  221603 out.go:368] Setting JSON to false
	I0110 02:49:02.383489  221603 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5492,"bootTime":1768007851,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 02:49:02.383581  221603 start.go:143] virtualization:  
	I0110 02:49:02.389171  221603 out.go:179] * [default-k8s-diff-port-403885] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:49:02.392463  221603 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:49:02.392541  221603 notify.go:221] Checking for updates...
	I0110 02:49:02.396782  221603 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:49:02.400161  221603 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:49:02.403164  221603 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 02:49:02.406520  221603 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:49:02.409481  221603 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	
	
	==> CRI-O <==
	Jan 10 02:48:40 no-preload-676905 crio[663]: time="2026-01-10T02:48:40.499194541Z" level=info msg="Started container" PID=1680 containerID=92b43250d4836d6795421decd73ba4fd8ee1fd121d721eb0db2e6514bbd285c8 description=kube-system/storage-provisioner/storage-provisioner id=512b0672-5d94-4d0c-84a9-841cccd4f481 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f3ed95c04dd4e7e20d24537ee498278d578e031bb57b72cc84c234d566d67548
	Jan 10 02:48:50 no-preload-676905 crio[663]: time="2026-01-10T02:48:50.130055166Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:48:50 no-preload-676905 crio[663]: time="2026-01-10T02:48:50.130495868Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:48:50 no-preload-676905 crio[663]: time="2026-01-10T02:48:50.13487898Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:48:50 no-preload-676905 crio[663]: time="2026-01-10T02:48:50.134915812Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:48:50 no-preload-676905 crio[663]: time="2026-01-10T02:48:50.139336773Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:48:50 no-preload-676905 crio[663]: time="2026-01-10T02:48:50.139371685Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:48:50 no-preload-676905 crio[663]: time="2026-01-10T02:48:50.143556689Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:48:50 no-preload-676905 crio[663]: time="2026-01-10T02:48:50.143592635Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:48:50 no-preload-676905 crio[663]: time="2026-01-10T02:48:50.143623215Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 10 02:48:50 no-preload-676905 crio[663]: time="2026-01-10T02:48:50.147739954Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:48:50 no-preload-676905 crio[663]: time="2026-01-10T02:48:50.147775605Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:48:56 no-preload-676905 crio[663]: time="2026-01-10T02:48:56.298796683Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=352878fd-2069-40b8-a128-abfc871c263b name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:48:56 no-preload-676905 crio[663]: time="2026-01-10T02:48:56.302010521Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=354a7cae-28d8-4fe8-88a7-3c568a0de01c name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:48:56 no-preload-676905 crio[663]: time="2026-01-10T02:48:56.305272449Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk/dashboard-metrics-scraper" id=775dedad-3f70-4d32-bcd5-bcfe4b10f9f3 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:48:56 no-preload-676905 crio[663]: time="2026-01-10T02:48:56.30537972Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:48:56 no-preload-676905 crio[663]: time="2026-01-10T02:48:56.315553209Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:48:56 no-preload-676905 crio[663]: time="2026-01-10T02:48:56.316262491Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:48:56 no-preload-676905 crio[663]: time="2026-01-10T02:48:56.335909213Z" level=info msg="Created container 9ca2cd8a5c9b3072491c22f97f3f70d330eac60ee6c190558e871b3e7c11e63b: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk/dashboard-metrics-scraper" id=775dedad-3f70-4d32-bcd5-bcfe4b10f9f3 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:48:56 no-preload-676905 crio[663]: time="2026-01-10T02:48:56.337081374Z" level=info msg="Starting container: 9ca2cd8a5c9b3072491c22f97f3f70d330eac60ee6c190558e871b3e7c11e63b" id=3c79b305-e1cc-4869-86b6-62ca52052a04 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:48:56 no-preload-676905 crio[663]: time="2026-01-10T02:48:56.340535125Z" level=info msg="Started container" PID=1770 containerID=9ca2cd8a5c9b3072491c22f97f3f70d330eac60ee6c190558e871b3e7c11e63b description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk/dashboard-metrics-scraper id=3c79b305-e1cc-4869-86b6-62ca52052a04 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f99913708474a6ab4e1be10be337b13b95f278ea8cf371aa9ebf9eb289a53483
	Jan 10 02:48:56 no-preload-676905 conmon[1768]: conmon 9ca2cd8a5c9b3072491c <ninfo>: container 1770 exited with status 1
	Jan 10 02:48:56 no-preload-676905 crio[663]: time="2026-01-10T02:48:56.512617092Z" level=info msg="Removing container: 54d02af0ab43c67b86a1069282916ed762848b8c601b8efc26fcfbf18c7aa2bc" id=2ef5a79f-bf7e-41ef-9ea8-5bbdc7e3629d name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:48:56 no-preload-676905 crio[663]: time="2026-01-10T02:48:56.522927234Z" level=info msg="Error loading conmon cgroup of container 54d02af0ab43c67b86a1069282916ed762848b8c601b8efc26fcfbf18c7aa2bc: cgroup deleted" id=2ef5a79f-bf7e-41ef-9ea8-5bbdc7e3629d name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:48:56 no-preload-676905 crio[663]: time="2026-01-10T02:48:56.53238825Z" level=info msg="Removed container 54d02af0ab43c67b86a1069282916ed762848b8c601b8efc26fcfbf18c7aa2bc: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk/dashboard-metrics-scraper" id=2ef5a79f-bf7e-41ef-9ea8-5bbdc7e3629d name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	9ca2cd8a5c9b3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago       Exited              dashboard-metrics-scraper   3                   f99913708474a       dashboard-metrics-scraper-867fb5f87b-kjnrk   kubernetes-dashboard
	92b43250d4836       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           22 seconds ago      Running             storage-provisioner         2                   f3ed95c04dd4e       storage-provisioner                          kube-system
	1f6b2e2b36205       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   45 seconds ago      Running             kubernetes-dashboard        0                   daf924e0d08a6       kubernetes-dashboard-b84665fb8-zvbxj         kubernetes-dashboard
	54ca813feadfc       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago      Running             busybox                     1                   563b7accf8622       busybox                                      default
	89055935644ac       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           53 seconds ago      Running             coredns                     1                   b990e7833cf1d       coredns-7d764666f9-v67dz                     kube-system
	068ee48cb5d42       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           53 seconds ago      Exited              storage-provisioner         1                   f3ed95c04dd4e       storage-provisioner                          kube-system
	27193b01d4873       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           53 seconds ago      Running             kube-proxy                  1                   f92502d421a6d       kube-proxy-r74hc                             kube-system
	93572ed1012d6       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           53 seconds ago      Running             kindnet-cni                 1                   18f7cd273abc8       kindnet-tsk2v                                kube-system
	6896022c3f268       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           58 seconds ago      Running             kube-apiserver              1                   af400886bf095       kube-apiserver-no-preload-676905             kube-system
	a7ff1e6390fa7       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           58 seconds ago      Running             etcd                        1                   2c7d639f71524       etcd-no-preload-676905                       kube-system
	d553ba4a25650       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           58 seconds ago      Running             kube-scheduler              1                   3d0a5185c4b70       kube-scheduler-no-preload-676905             kube-system
	e373472937baf       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           58 seconds ago      Running             kube-controller-manager     1                   9de6744774c99       kube-controller-manager-no-preload-676905    kube-system
	
	
	==> coredns [89055935644acadcf02b28ab8d6db16656e01c56a0338f89d963ab2396bdcd1c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:58354 - 36554 "HINFO IN 3253843191460191545.4344913309749210345. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012726782s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-676905
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-676905
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=no-preload-676905
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_47_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:47:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-676905
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:48:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:48:50 +0000   Sat, 10 Jan 2026 02:47:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:48:50 +0000   Sat, 10 Jan 2026 02:47:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:48:50 +0000   Sat, 10 Jan 2026 02:47:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:48:50 +0000   Sat, 10 Jan 2026 02:47:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-676905
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                f6842875-d9d6-4f29-b119-b957541c22e9
	  Boot ID:                    41f52236-76b9-4dc8-bfe3-121fbdeb9659
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-7d764666f9-v67dz                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 etcd-no-preload-676905                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         113s
	  kube-system                 kindnet-tsk2v                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-no-preload-676905              250m (12%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-no-preload-676905     200m (10%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-r74hc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-no-preload-676905              100m (5%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-kjnrk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-zvbxj          0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  109s  node-controller  Node no-preload-676905 event: Registered Node no-preload-676905 in Controller
	  Normal  RegisteredNode  51s   node-controller  Node no-preload-676905 event: Registered Node no-preload-676905 in Controller
	
	
	==> dmesg <==
	[ +27.765975] overlayfs: idmapped layers are currently not supported
	[Jan10 02:15] overlayfs: idmapped layers are currently not supported
	[Jan10 02:16] overlayfs: idmapped layers are currently not supported
	[Jan10 02:17] overlayfs: idmapped layers are currently not supported
	[Jan10 02:18] overlayfs: idmapped layers are currently not supported
	[ +23.212409] overlayfs: idmapped layers are currently not supported
	[  +9.866169] overlayfs: idmapped layers are currently not supported
	[Jan10 02:19] overlayfs: idmapped layers are currently not supported
	[Jan10 02:20] overlayfs: idmapped layers are currently not supported
	[ +35.960240] overlayfs: idmapped layers are currently not supported
	[Jan10 02:21] overlayfs: idmapped layers are currently not supported
	[Jan10 02:23] overlayfs: idmapped layers are currently not supported
	[Jan10 02:24] overlayfs: idmapped layers are currently not supported
	[Jan10 02:25] overlayfs: idmapped layers are currently not supported
	[Jan10 02:30] overlayfs: idmapped layers are currently not supported
	[Jan10 02:31] overlayfs: idmapped layers are currently not supported
	[Jan10 02:35] overlayfs: idmapped layers are currently not supported
	[Jan10 02:37] overlayfs: idmapped layers are currently not supported
	[Jan10 02:41] overlayfs: idmapped layers are currently not supported
	[ +37.534021] overlayfs: idmapped layers are currently not supported
	[Jan10 02:43] overlayfs: idmapped layers are currently not supported
	[Jan10 02:44] overlayfs: idmapped layers are currently not supported
	[Jan10 02:45] overlayfs: idmapped layers are currently not supported
	[Jan10 02:46] overlayfs: idmapped layers are currently not supported
	[Jan10 02:48] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a7ff1e6390fa775c123c587323c7e2f6cec3b4b2d846529ec2347776ff26d6f4] <==
	{"level":"info","ts":"2026-01-10T02:48:05.598494Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T02:48:05.598630Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T02:48:05.599446Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2026-01-10T02:48:05.604550Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T02:48:05.604588Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T02:48:05.604025Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:48:05.604626Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:48:06.331855Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T02:48:06.331982Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:48:06.332061Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T02:48:06.332108Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:48:06.332154Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T02:48:06.339842Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T02:48:06.339948Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:48:06.340003Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T02:48:06.340038Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T02:48:06.348013Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-676905 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:48:06.348123Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:48:06.348184Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:48:06.349315Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:48:06.353737Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T02:48:06.379832Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:48:06.379946Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:48:06.381954Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:48:06.395862Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 02:49:03 up  1:31,  0 user,  load average: 2.93, 2.30, 1.99
	Linux no-preload-676905 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [93572ed1012d65280df6c8e67b6f1922bebba09495f4249d9770c01186da9814] <==
	I0110 02:48:09.831710       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:48:09.832062       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 02:48:09.832200       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:48:09.832212       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:48:09.832221       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:48:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:48:10.125723       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:48:10.125744       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:48:10.125753       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:48:10.126070       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0110 02:48:40.224173       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0110 02:48:40.224173       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0110 02:48:40.224305       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0110 02:48:40.224403       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I0110 02:48:41.726452       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:48:41.726483       1 metrics.go:72] Registering metrics
	I0110 02:48:41.726534       1 controller.go:711] "Syncing nftables rules"
	I0110 02:48:50.125227       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:48:50.125267       1 main.go:301] handling current node
	I0110 02:49:00.141909       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 02:49:00.141948       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6896022c3f268165644bf92599f94c980310297bd558b3038a54e557be2f3059] <==
	I0110 02:48:09.118166       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:09.118241       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0110 02:48:09.118577       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 02:48:09.140544       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 02:48:09.149752       1 cache.go:39] Caches are synced for autoregister controller
	I0110 02:48:09.156963       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 02:48:09.171961       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:09.171983       1 policy_source.go:248] refreshing policies
	I0110 02:48:09.172107       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0110 02:48:09.172116       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0110 02:48:09.186841       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 02:48:09.225499       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E0110 02:48:09.250085       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 02:48:09.290263       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:48:09.649908       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:48:10.173545       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:48:10.221738       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:48:10.249275       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:48:10.258173       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:48:10.334625       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.24.11"}
	I0110 02:48:10.386321       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.65.211"}
	I0110 02:48:12.494031       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:48:12.494146       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:48:12.670750       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 02:48:12.797494       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [e373472937baf1f075e16e1423aa5e5fbcd79cb428ae5d27b8fc181cd4d93dae] <==
	I0110 02:48:12.103229       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.103232       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.103241       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.103241       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.103252       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.103336       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.106171       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:48:12.113694       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.103713       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.101556       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.103922       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.101581       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.104003       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.104016       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.103531       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.103620       1 range_allocator.go:177] "Sending events to api server"
	I0110 02:48:12.138578       1 range_allocator.go:181] "Starting range CIDR allocator"
	I0110 02:48:12.138627       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:48:12.138684       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.203288       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.203377       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:48:12.203394       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:48:12.206327       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:12.806207       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I0110 02:48:12.806412       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [27193b01d487358eda6303830a14176f18fda2cc8ccb2d5478cc50ca9111e0fa] <==
	I0110 02:48:10.038115       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:48:10.200826       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:48:10.301657       1 shared_informer.go:377] "Caches are synced"
	I0110 02:48:10.301689       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 02:48:10.301770       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:48:10.401323       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:48:10.401380       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:48:10.426645       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:48:10.428276       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:48:10.428305       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:48:10.456650       1 config.go:200] "Starting service config controller"
	I0110 02:48:10.456667       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:48:10.456702       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:48:10.456706       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:48:10.456737       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:48:10.456742       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:48:10.463112       1 config.go:309] "Starting node config controller"
	I0110 02:48:10.463129       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:48:10.463136       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:48:10.557294       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:48:10.557462       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 02:48:10.557479       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d553ba4a2565012123c50e31a0791c4773b9cf73ab9ea054e4d6cac50e536f76] <==
	I0110 02:48:07.648849       1 serving.go:386] Generated self-signed cert in-memory
	W0110 02:48:08.754513       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 02:48:08.754539       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 02:48:08.754549       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 02:48:08.754564       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 02:48:08.939218       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 02:48:08.939248       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:48:08.950742       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 02:48:08.950772       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:48:08.964013       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 02:48:08.964126       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 02:48:09.151877       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:48:24 no-preload-676905 kubelet[783]: I0110 02:48:24.417974     783 scope.go:122] "RemoveContainer" containerID="86dc8adae2d4753385f851832f0ed29e1d47b0fed43e6b5c393c7d7115cb58a5"
	Jan 10 02:48:24 no-preload-676905 kubelet[783]: E0110 02:48:24.418204     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-kjnrk_kubernetes-dashboard(a045c7cb-7d1d-4de3-bd66-a14b6565b52d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk" podUID="a045c7cb-7d1d-4de3-bd66-a14b6565b52d"
	Jan 10 02:48:29 no-preload-676905 kubelet[783]: E0110 02:48:29.644428     783 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk" containerName="dashboard-metrics-scraper"
	Jan 10 02:48:29 no-preload-676905 kubelet[783]: I0110 02:48:29.644468     783 scope.go:122] "RemoveContainer" containerID="86dc8adae2d4753385f851832f0ed29e1d47b0fed43e6b5c393c7d7115cb58a5"
	Jan 10 02:48:29 no-preload-676905 kubelet[783]: E0110 02:48:29.644628     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-kjnrk_kubernetes-dashboard(a045c7cb-7d1d-4de3-bd66-a14b6565b52d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk" podUID="a045c7cb-7d1d-4de3-bd66-a14b6565b52d"
	Jan 10 02:48:34 no-preload-676905 kubelet[783]: E0110 02:48:34.297999     783 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk" containerName="dashboard-metrics-scraper"
	Jan 10 02:48:34 no-preload-676905 kubelet[783]: I0110 02:48:34.298495     783 scope.go:122] "RemoveContainer" containerID="86dc8adae2d4753385f851832f0ed29e1d47b0fed43e6b5c393c7d7115cb58a5"
	Jan 10 02:48:34 no-preload-676905 kubelet[783]: E0110 02:48:34.441883     783 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk" containerName="dashboard-metrics-scraper"
	Jan 10 02:48:34 no-preload-676905 kubelet[783]: I0110 02:48:34.441694     783 scope.go:122] "RemoveContainer" containerID="86dc8adae2d4753385f851832f0ed29e1d47b0fed43e6b5c393c7d7115cb58a5"
	Jan 10 02:48:34 no-preload-676905 kubelet[783]: I0110 02:48:34.443761     783 scope.go:122] "RemoveContainer" containerID="54d02af0ab43c67b86a1069282916ed762848b8c601b8efc26fcfbf18c7aa2bc"
	Jan 10 02:48:34 no-preload-676905 kubelet[783]: E0110 02:48:34.444038     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-kjnrk_kubernetes-dashboard(a045c7cb-7d1d-4de3-bd66-a14b6565b52d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk" podUID="a045c7cb-7d1d-4de3-bd66-a14b6565b52d"
	Jan 10 02:48:39 no-preload-676905 kubelet[783]: E0110 02:48:39.644199     783 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk" containerName="dashboard-metrics-scraper"
	Jan 10 02:48:39 no-preload-676905 kubelet[783]: I0110 02:48:39.644263     783 scope.go:122] "RemoveContainer" containerID="54d02af0ab43c67b86a1069282916ed762848b8c601b8efc26fcfbf18c7aa2bc"
	Jan 10 02:48:39 no-preload-676905 kubelet[783]: E0110 02:48:39.644469     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-kjnrk_kubernetes-dashboard(a045c7cb-7d1d-4de3-bd66-a14b6565b52d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk" podUID="a045c7cb-7d1d-4de3-bd66-a14b6565b52d"
	Jan 10 02:48:40 no-preload-676905 kubelet[783]: I0110 02:48:40.457502     783 scope.go:122] "RemoveContainer" containerID="068ee48cb5d42288b992f6422f0f6edf35a1579daa69d7eb3ffb9e08f9182b2a"
	Jan 10 02:48:42 no-preload-676905 kubelet[783]: E0110 02:48:42.662375     783 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-v67dz" containerName="coredns"
	Jan 10 02:48:56 no-preload-676905 kubelet[783]: E0110 02:48:56.298264     783 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk" containerName="dashboard-metrics-scraper"
	Jan 10 02:48:56 no-preload-676905 kubelet[783]: I0110 02:48:56.298310     783 scope.go:122] "RemoveContainer" containerID="54d02af0ab43c67b86a1069282916ed762848b8c601b8efc26fcfbf18c7aa2bc"
	Jan 10 02:48:56 no-preload-676905 kubelet[783]: I0110 02:48:56.497605     783 scope.go:122] "RemoveContainer" containerID="54d02af0ab43c67b86a1069282916ed762848b8c601b8efc26fcfbf18c7aa2bc"
	Jan 10 02:48:56 no-preload-676905 kubelet[783]: E0110 02:48:56.498231     783 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk" containerName="dashboard-metrics-scraper"
	Jan 10 02:48:56 no-preload-676905 kubelet[783]: I0110 02:48:56.498257     783 scope.go:122] "RemoveContainer" containerID="9ca2cd8a5c9b3072491c22f97f3f70d330eac60ee6c190558e871b3e7c11e63b"
	Jan 10 02:48:56 no-preload-676905 kubelet[783]: E0110 02:48:56.498777     783 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-kjnrk_kubernetes-dashboard(a045c7cb-7d1d-4de3-bd66-a14b6565b52d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-kjnrk" podUID="a045c7cb-7d1d-4de3-bd66-a14b6565b52d"
	Jan 10 02:48:56 no-preload-676905 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 02:48:56 no-preload-676905 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 02:48:56 no-preload-676905 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [1f6b2e2b36205985d05a8d8ccaa07bf33db63d212ad795c058c0b9fffacd012a] <==
	2026/01/10 02:48:17 Using namespace: kubernetes-dashboard
	2026/01/10 02:48:17 Using in-cluster config to connect to apiserver
	2026/01/10 02:48:17 Using secret token for csrf signing
	2026/01/10 02:48:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 02:48:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 02:48:17 Successful initial request to the apiserver, version: v1.35.0
	2026/01/10 02:48:17 Generating JWE encryption key
	2026/01/10 02:48:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 02:48:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 02:48:18 Initializing JWE encryption key from synchronized object
	2026/01/10 02:48:18 Creating in-cluster Sidecar client
	2026/01/10 02:48:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:48:18 Serving insecurely on HTTP port: 9090
	2026/01/10 02:48:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:48:17 Starting overwatch
	
	
	==> storage-provisioner [068ee48cb5d42288b992f6422f0f6edf35a1579daa69d7eb3ffb9e08f9182b2a] <==
	I0110 02:48:09.982348       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 02:48:39.985512       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [92b43250d4836d6795421decd73ba4fd8ee1fd121d721eb0db2e6514bbd285c8] <==
	I0110 02:48:40.515001       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 02:48:40.530338       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 02:48:40.530505       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 02:48:40.532806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:48:43.988135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:48:48.248546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:48:51.846753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:48:54.900652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:48:57.923264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:48:57.927770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:48:57.928271       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 02:48:57.928470       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-676905_e1aa6446-9bbb-466d-8423-2bfa866cdf9a!
	I0110 02:48:57.929426       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d3b5d431-8073-4c52-ab3f-9b40b241d7ee", APIVersion:"v1", ResourceVersion:"659", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-676905_e1aa6446-9bbb-466d-8423-2bfa866cdf9a became leader
	W0110 02:48:57.937364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:48:57.961405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:48:58.029142       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-676905_e1aa6446-9bbb-466d-8423-2bfa866cdf9a!
	W0110 02:48:59.967456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:48:59.973677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:49:01.976925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:49:01.982677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:49:03.986487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:49:03.993249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-676905 -n no-preload-676905
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-676905 -n no-preload-676905: exit status 2 (422.783913ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-676905 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (8.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.73s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-733680 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-733680 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (310.390318ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:49:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-733680 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-733680
helpers_test.go:244: (dbg) docker inspect newest-cni-733680:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "332f4ab8cb32b25616ea970bfb5fb05bb610d82c68c84ff5df6ca1a0461deecb",
	        "Created": "2026-01-10T02:49:13.990665872Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 224179,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:49:14.047005782Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/332f4ab8cb32b25616ea970bfb5fb05bb610d82c68c84ff5df6ca1a0461deecb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/332f4ab8cb32b25616ea970bfb5fb05bb610d82c68c84ff5df6ca1a0461deecb/hostname",
	        "HostsPath": "/var/lib/docker/containers/332f4ab8cb32b25616ea970bfb5fb05bb610d82c68c84ff5df6ca1a0461deecb/hosts",
	        "LogPath": "/var/lib/docker/containers/332f4ab8cb32b25616ea970bfb5fb05bb610d82c68c84ff5df6ca1a0461deecb/332f4ab8cb32b25616ea970bfb5fb05bb610d82c68c84ff5df6ca1a0461deecb-json.log",
	        "Name": "/newest-cni-733680",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-733680:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-733680",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "332f4ab8cb32b25616ea970bfb5fb05bb610d82c68c84ff5df6ca1a0461deecb",
	                "LowerDir": "/var/lib/docker/overlay2/84cf567e665ab8640c65dbd3cc3a50ebdb994b7629de1cf36f4ce8e5f0c36014-init/diff:/var/lib/docker/overlay2/2b235871be32ebd3e55c0a490bf5b289824c6be94b4d2763f5bca3b814af3fd1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/84cf567e665ab8640c65dbd3cc3a50ebdb994b7629de1cf36f4ce8e5f0c36014/merged",
	                "UpperDir": "/var/lib/docker/overlay2/84cf567e665ab8640c65dbd3cc3a50ebdb994b7629de1cf36f4ce8e5f0c36014/diff",
	                "WorkDir": "/var/lib/docker/overlay2/84cf567e665ab8640c65dbd3cc3a50ebdb994b7629de1cf36f4ce8e5f0c36014/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-733680",
	                "Source": "/var/lib/docker/volumes/newest-cni-733680/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-733680",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-733680",
	                "name.minikube.sigs.k8s.io": "newest-cni-733680",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cd26ca3d1f22e8802106d5d30be8bb4bfa7cb63b0c2f9d7caad1f83597b50f14",
	            "SandboxKey": "/var/run/docker/netns/cd26ca3d1f22",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-733680": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:91:de:c1:1a:14",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c13b302fa5016d01187ac7a2edef31e75f6720c21560b52b4739e7f7514c4136",
	                    "EndpointID": "8f8ccf48d3d5f4687f403a9d6ed7b1cdc29050a57f3bbea76352700f85ab87f9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-733680",
	                        "332f4ab8cb32"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-733680 -n newest-cni-733680
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-733680 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-733680 logs -n 25: (1.226976258s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-736081                                                                                                                                                                                                                     │ old-k8s-version-736081       │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ start   │ -p embed-certs-290628 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:44 UTC │ 10 Jan 26 02:44 UTC │
	│ addons  │ enable metrics-server -p embed-certs-290628 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │                     │
	│ stop    │ -p embed-certs-290628 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │ 10 Jan 26 02:45 UTC │
	│ addons  │ enable dashboard -p embed-certs-290628 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │ 10 Jan 26 02:45 UTC │
	│ start   │ -p embed-certs-290628 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:45 UTC │ 10 Jan 26 02:46 UTC │
	│ image   │ embed-certs-290628 image list --format=json                                                                                                                                                                                                   │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ pause   │ -p embed-certs-290628 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │                     │
	│ delete  │ -p embed-certs-290628                                                                                                                                                                                                                         │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ delete  │ -p embed-certs-290628                                                                                                                                                                                                                         │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ delete  │ -p disable-driver-mounts-990753                                                                                                                                                                                                               │ disable-driver-mounts-990753 │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ start   │ -p no-preload-676905 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:47 UTC │
	│ addons  │ enable metrics-server -p no-preload-676905 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │                     │
	│ stop    │ -p no-preload-676905 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │ 10 Jan 26 02:47 UTC │
	│ addons  │ enable dashboard -p no-preload-676905 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │ 10 Jan 26 02:47 UTC │
	│ start   │ -p no-preload-676905 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │ 10 Jan 26 02:48 UTC │
	│ image   │ no-preload-676905 image list --format=json                                                                                                                                                                                                    │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │ 10 Jan 26 02:48 UTC │
	│ pause   │ -p no-preload-676905 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │                     │
	│ ssh     │ force-systemd-flag-038359 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-038359    │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │ 10 Jan 26 02:48 UTC │
	│ delete  │ -p force-systemd-flag-038359                                                                                                                                                                                                                  │ force-systemd-flag-038359    │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │ 10 Jan 26 02:49 UTC │
	│ start   │ -p default-k8s-diff-port-403885 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-403885 │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │                     │
	│ delete  │ -p no-preload-676905                                                                                                                                                                                                                          │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ delete  │ -p no-preload-676905                                                                                                                                                                                                                          │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ start   │ -p newest-cni-733680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-733680            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ addons  │ enable metrics-server -p newest-cni-733680 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-733680            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:49:09
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
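For reference, the lines below follow the klog format described above: a severity letter (I/W/E/F), the date and time, the thread id, and the source file and line. Purely as an illustration, and assuming this section has been saved to a file named last-start.log (a hypothetical name), the warnings and errors can be isolated with:

	# Illustrative only: keep klog lines whose severity letter is W or E.
	grep -E '^[[:space:]]*[WE][0-9]{4} [0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]+' last-start.log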
	I0110 02:49:09.155292  223339 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:49:09.155417  223339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:49:09.155430  223339 out.go:374] Setting ErrFile to fd 2...
	I0110 02:49:09.155436  223339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:49:09.155709  223339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:49:09.156224  223339 out.go:368] Setting JSON to false
	I0110 02:49:09.157147  223339 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5499,"bootTime":1768007851,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 02:49:09.157231  223339 start.go:143] virtualization:  
	I0110 02:49:09.162797  223339 out.go:179] * [newest-cni-733680] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:49:09.166000  223339 notify.go:221] Checking for updates...
	I0110 02:49:09.166632  223339 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:49:09.172141  223339 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:49:09.175421  223339 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:49:09.178464  223339 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 02:49:09.181384  223339 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:49:09.184517  223339 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:49:09.188246  223339 config.go:182] Loaded profile config "default-k8s-diff-port-403885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:49:09.188351  223339 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:49:09.229154  223339 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:49:09.229285  223339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:49:09.311912  223339 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:54 SystemTime:2026-01-10 02:49:09.300638071 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:49:09.312010  223339 docker.go:319] overlay module found
	I0110 02:49:09.315370  223339 out.go:179] * Using the docker driver based on user configuration
	I0110 02:49:09.318284  223339 start.go:309] selected driver: docker
	I0110 02:49:09.318300  223339 start.go:928] validating driver "docker" against <nil>
	I0110 02:49:09.318313  223339 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:49:09.318996  223339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:49:09.388042  223339 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:54 SystemTime:2026-01-10 02:49:09.379065811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:49:09.388211  223339 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W0110 02:49:09.388244  223339 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0110 02:49:09.388465  223339 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 02:49:09.391251  223339 out.go:179] * Using Docker driver with root privileges
	I0110 02:49:09.394013  223339 cni.go:84] Creating CNI manager for ""
	I0110 02:49:09.394078  223339 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:49:09.394092  223339 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 02:49:09.394178  223339 start.go:353] cluster config:
	{Name:newest-cni-733680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-733680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:49:09.398818  223339 out.go:179] * Starting "newest-cni-733680" primary control-plane node in "newest-cni-733680" cluster
	I0110 02:49:09.401748  223339 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:49:09.404635  223339 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:49:09.407445  223339 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:49:09.407487  223339 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 02:49:09.407495  223339 cache.go:65] Caching tarball of preloaded images
	I0110 02:49:09.407570  223339 preload.go:251] Found /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 02:49:09.407580  223339 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:49:09.407694  223339 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/config.json ...
	I0110 02:49:09.407725  223339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/config.json: {Name:mk788b32f92316a88b368412d843740b310a9b72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:09.407941  223339 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:49:09.449778  223339 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:49:09.449799  223339 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:49:09.449819  223339 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:49:09.449848  223339 start.go:360] acquireMachinesLock for newest-cni-733680: {Name:mkffafc06373cf7d630e08f2554eaef3a62ff5c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:49:09.449952  223339 start.go:364] duration metric: took 84.289µs to acquireMachinesLock for "newest-cni-733680"
	I0110 02:49:09.449981  223339 start.go:93] Provisioning new machine with config: &{Name:newest-cni-733680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-733680 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:49:09.450048  223339 start.go:125] createHost starting for "" (driver="docker")
	I0110 02:49:07.528938  221603 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-403885 --format={{.State.Running}}
	I0110 02:49:07.564138  221603 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-403885 --format={{.State.Status}}
	I0110 02:49:07.591188  221603 cli_runner.go:164] Run: docker exec default-k8s-diff-port-403885 stat /var/lib/dpkg/alternatives/iptables
	I0110 02:49:07.664620  221603 oci.go:144] the created container "default-k8s-diff-port-403885" has a running status.
	I0110 02:49:07.664654  221603 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/default-k8s-diff-port-403885/id_rsa...
	I0110 02:49:08.486052  221603 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22414-2353/.minikube/machines/default-k8s-diff-port-403885/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 02:49:08.582646  221603 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-403885 --format={{.State.Status}}
	I0110 02:49:08.609637  221603 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 02:49:08.609659  221603 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-403885 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 02:49:08.691280  221603 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-403885 --format={{.State.Status}}
	I0110 02:49:08.716167  221603 machine.go:94] provisionDockerMachine start ...
	I0110 02:49:08.716258  221603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:49:08.743164  221603 main.go:144] libmachine: Using SSH client type: native
	I0110 02:49:08.743492  221603 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I0110 02:49:08.743509  221603 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:49:08.960164  221603 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-403885
	
	I0110 02:49:08.960188  221603 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-403885"
	I0110 02:49:08.960273  221603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:49:08.992831  221603 main.go:144] libmachine: Using SSH client type: native
	I0110 02:49:08.993136  221603 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I0110 02:49:08.993148  221603 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-403885 && echo "default-k8s-diff-port-403885" | sudo tee /etc/hostname
	I0110 02:49:09.193672  221603 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-403885
	
	I0110 02:49:09.193748  221603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:49:09.230786  221603 main.go:144] libmachine: Using SSH client type: native
	I0110 02:49:09.231082  221603 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I0110 02:49:09.231100  221603 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-403885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-403885/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-403885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:49:09.400051  221603 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:49:09.400078  221603 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2353/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2353/.minikube}
	I0110 02:49:09.400108  221603 ubuntu.go:190] setting up certificates
	I0110 02:49:09.400119  221603 provision.go:84] configureAuth start
	I0110 02:49:09.400183  221603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-403885
	I0110 02:49:09.419507  221603 provision.go:143] copyHostCerts
	I0110 02:49:09.419581  221603 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem, removing ...
	I0110 02:49:09.419599  221603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem
	I0110 02:49:09.419674  221603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem (1123 bytes)
	I0110 02:49:09.419774  221603 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem, removing ...
	I0110 02:49:09.419784  221603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem
	I0110 02:49:09.419839  221603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem (1679 bytes)
	I0110 02:49:09.419907  221603 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem, removing ...
	I0110 02:49:09.419915  221603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem
	I0110 02:49:09.419941  221603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem (1082 bytes)
	I0110 02:49:09.419988  221603 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-403885 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-403885 localhost minikube]
	I0110 02:49:09.648339  221603 provision.go:177] copyRemoteCerts
	I0110 02:49:09.648428  221603 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:49:09.648488  221603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:49:09.670307  221603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/default-k8s-diff-port-403885/id_rsa Username:docker}
	I0110 02:49:09.778391  221603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:49:09.797122  221603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0110 02:49:09.821229  221603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:49:09.843005  221603 provision.go:87] duration metric: took 442.850454ms to configureAuth
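The server certificate generated above carries the SANs listed in the log (127.0.0.1, 192.168.85.2, the profile name, localhost, minikube) before being copied to /etc/docker on the node. As a sketch, assuming openssl is available on the Jenkins host, the SAN list can be checked against the copy kept under the minikube home directory:

	# Illustrative only: print the Subject Alternative Name extension of the generated server cert.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'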
	I0110 02:49:09.843038  221603 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:49:09.843265  221603 config.go:182] Loaded profile config "default-k8s-diff-port-403885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:49:09.843418  221603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:49:09.871146  221603 main.go:144] libmachine: Using SSH client type: native
	I0110 02:49:09.871498  221603 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I0110 02:49:09.871521  221603 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:49:10.227645  221603 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:49:10.227691  221603 machine.go:97] duration metric: took 1.511488677s to provisionDockerMachine
	I0110 02:49:10.227729  221603 client.go:176] duration metric: took 7.514671355s to LocalClient.Create
	I0110 02:49:10.227778  221603 start.go:167] duration metric: took 7.51475745s to libmachine.API.Create "default-k8s-diff-port-403885"
	I0110 02:49:10.227832  221603 start.go:293] postStartSetup for "default-k8s-diff-port-403885" (driver="docker")
	I0110 02:49:10.227852  221603 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:49:10.227943  221603 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:49:10.228021  221603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:49:10.250419  221603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/default-k8s-diff-port-403885/id_rsa Username:docker}
	I0110 02:49:10.361290  221603 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:49:10.366124  221603 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:49:10.366163  221603 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:49:10.366176  221603 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/addons for local assets ...
	I0110 02:49:10.366231  221603 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/files for local assets ...
	I0110 02:49:10.366326  221603 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> 41682.pem in /etc/ssl/certs
	I0110 02:49:10.366441  221603 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:49:10.376249  221603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:49:10.402442  221603 start.go:296] duration metric: took 174.587252ms for postStartSetup
	I0110 02:49:10.402809  221603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-403885
	I0110 02:49:10.426875  221603 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/config.json ...
	I0110 02:49:10.427148  221603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:49:10.427189  221603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:49:10.465289  221603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/default-k8s-diff-port-403885/id_rsa Username:docker}
	I0110 02:49:10.585855  221603 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:49:10.591235  221603 start.go:128] duration metric: took 7.881804858s to createHost
	I0110 02:49:10.591257  221603 start.go:83] releasing machines lock for "default-k8s-diff-port-403885", held for 7.881924353s
	I0110 02:49:10.591324  221603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-403885
	I0110 02:49:10.610644  221603 ssh_runner.go:195] Run: cat /version.json
	I0110 02:49:10.610703  221603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:49:10.610922  221603 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:49:10.610983  221603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:49:10.627756  221603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/default-k8s-diff-port-403885/id_rsa Username:docker}
	I0110 02:49:10.657447  221603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/default-k8s-diff-port-403885/id_rsa Username:docker}
	I0110 02:49:10.851621  221603 ssh_runner.go:195] Run: systemctl --version
	I0110 02:49:10.858556  221603 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:49:10.905318  221603 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:49:10.910068  221603 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:49:10.910151  221603 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:49:10.941634  221603 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 02:49:10.941705  221603 start.go:496] detecting cgroup driver to use...
	I0110 02:49:10.941751  221603 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 02:49:10.941828  221603 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:49:10.962586  221603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:49:10.983040  221603 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:49:10.983149  221603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:49:11.001854  221603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:49:11.022564  221603 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:49:11.180896  221603 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:49:11.340276  221603 docker.go:234] disabling docker service ...
	I0110 02:49:11.340396  221603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:49:11.384936  221603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:49:11.398560  221603 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:49:11.551826  221603 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:49:11.701704  221603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:49:11.715834  221603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:49:11.736744  221603 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:49:11.736859  221603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:11.746200  221603 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 02:49:11.746319  221603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:11.755447  221603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:11.764575  221603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:11.773532  221603 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:49:11.781907  221603 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:11.790901  221603 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:11.804937  221603 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:11.814192  221603 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:49:11.822488  221603 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:49:11.830845  221603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:49:11.973149  221603 ssh_runner.go:195] Run: sudo systemctl restart crio
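The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image registry.k8s.io/pause:3.10.1, cgroup_manager "cgroupfs", conmon_cgroup "pod", and the net.ipv4.ip_unprivileged_port_start=0 default sysctl) and then restarts CRI-O. A quick way to confirm the result, sketched here on the assumption that the node container is still running:

	# Illustrative only: show the settings the sed edits above are expected to leave in place.
	docker exec default-k8s-diff-port-403885 \
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf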
	I0110 02:49:09.453644  223339 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:49:09.453862  223339 start.go:159] libmachine.API.Create for "newest-cni-733680" (driver="docker")
	I0110 02:49:09.453887  223339 client.go:173] LocalClient.Create starting
	I0110 02:49:09.453963  223339 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem
	I0110 02:49:09.453999  223339 main.go:144] libmachine: Decoding PEM data...
	I0110 02:49:09.454016  223339 main.go:144] libmachine: Parsing certificate...
	I0110 02:49:09.454069  223339 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem
	I0110 02:49:09.454088  223339 main.go:144] libmachine: Decoding PEM data...
	I0110 02:49:09.454099  223339 main.go:144] libmachine: Parsing certificate...
	I0110 02:49:09.454497  223339 cli_runner.go:164] Run: docker network inspect newest-cni-733680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:49:09.481321  223339 cli_runner.go:211] docker network inspect newest-cni-733680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:49:09.481394  223339 network_create.go:284] running [docker network inspect newest-cni-733680] to gather additional debugging logs...
	I0110 02:49:09.481424  223339 cli_runner.go:164] Run: docker network inspect newest-cni-733680
	W0110 02:49:09.507125  223339 cli_runner.go:211] docker network inspect newest-cni-733680 returned with exit code 1
	I0110 02:49:09.507156  223339 network_create.go:287] error running [docker network inspect newest-cni-733680]: docker network inspect newest-cni-733680: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-733680 not found
	I0110 02:49:09.507169  223339 network_create.go:289] output of [docker network inspect newest-cni-733680]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-733680 not found
	
	** /stderr **
	I0110 02:49:09.507258  223339 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:49:09.527176  223339 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dba69832168e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:28:cf:b8:5c:6c} reservation:<nil>}
	I0110 02:49:09.527524  223339 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1bcca9209747 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d2:8b:83:a4:43:ae} reservation:<nil>}
	I0110 02:49:09.527885  223339 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-383ddfacc8f8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:26:ff:fb:97:66:af} reservation:<nil>}
	I0110 02:49:09.528264  223339 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019295c0}
	I0110 02:49:09.528280  223339 network_create.go:124] attempt to create docker network newest-cni-733680 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0110 02:49:09.528340  223339 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-733680 newest-cni-733680
	I0110 02:49:09.618751  223339 network_create.go:108] docker network newest-cni-733680 192.168.76.0/24 created
	I0110 02:49:09.618780  223339 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-733680" container
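Having skipped the subnets already in use (192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24), minikube creates the newest-cni-733680 bridge network on 192.168.76.0/24 and pins the node at 192.168.76.2. Illustratively, the same Go-template style used elsewhere in this log reads the result back:

	# Illustrative only: print the subnet and gateway of the network created above.
	docker network inspect newest-cni-733680 \
	  -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'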
	I0110 02:49:09.618863  223339 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:49:09.643768  223339 cli_runner.go:164] Run: docker volume create newest-cni-733680 --label name.minikube.sigs.k8s.io=newest-cni-733680 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:49:09.670275  223339 oci.go:103] Successfully created a docker volume newest-cni-733680
	I0110 02:49:09.670369  223339 cli_runner.go:164] Run: docker run --rm --name newest-cni-733680-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-733680 --entrypoint /usr/bin/test -v newest-cni-733680:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 02:49:10.257565  223339 oci.go:107] Successfully prepared a docker volume newest-cni-733680
	I0110 02:49:10.258101  223339 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:49:10.258119  223339 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 02:49:10.258263  223339 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-733680:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 02:49:13.877584  223339 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-733680:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.619282138s)
	I0110 02:49:13.877619  223339 kic.go:203] duration metric: took 3.619494284s to extract preloaded images to volume ...
	W0110 02:49:13.877809  223339 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 02:49:13.877928  223339 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 02:49:13.974327  223339 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-733680 --name newest-cni-733680 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-733680 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-733680 --network newest-cni-733680 --ip 192.168.76.2 --volume newest-cni-733680:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
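The docker run above publishes ports 22, 2376, 5000, 8443 and 32443 of the node container on ephemeral host ports bound to 127.0.0.1. The host port that ends up carrying SSH can be read back with the same inspect template minikube itself uses later in this log, for example:

	# Illustrative only: the 127.0.0.1 host port mapped to the container's SSH port.
	docker container inspect newest-cni-733680 \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'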
	I0110 02:49:14.043742  221603 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.070511649s)
	I0110 02:49:14.043766  221603 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:49:14.044106  221603 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:49:14.050269  221603 start.go:574] Will wait 60s for crictl version
	I0110 02:49:14.050330  221603 ssh_runner.go:195] Run: which crictl
	I0110 02:49:14.058324  221603 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:49:14.101002  221603 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:49:14.101089  221603 ssh_runner.go:195] Run: crio --version
	I0110 02:49:14.157403  221603 ssh_runner.go:195] Run: crio --version
	I0110 02:49:14.194929  221603 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:49:14.197729  221603 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-403885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:49:14.216900  221603 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 02:49:14.221838  221603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:49:14.232787  221603 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-403885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-403885 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:49:14.232909  221603 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:49:14.232962  221603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:49:14.305754  221603 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:49:14.305834  221603 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:49:14.306035  221603 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:49:14.352888  221603 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:49:14.352910  221603 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:49:14.352919  221603 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.35.0 crio true true} ...
	I0110 02:49:14.353016  221603 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-403885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-403885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
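The ExecStart override above becomes a systemd drop-in (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down). As a sketch, the effective unit plus drop-in can be reviewed on the node with:

	# Illustrative only: show kubelet.service together with the minikube drop-in.
	docker exec default-k8s-diff-port-403885 systemctl cat kubelet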
	I0110 02:49:14.353094  221603 ssh_runner.go:195] Run: crio config
	I0110 02:49:14.441751  221603 cni.go:84] Creating CNI manager for ""
	I0110 02:49:14.441771  221603 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:49:14.441787  221603 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:49:14.441810  221603 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-403885 NodeName:default-k8s-diff-port-403885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:49:14.441932  221603 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-403885"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:49:14.441997  221603 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:49:14.457354  221603 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:49:14.457423  221603 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:49:14.471974  221603 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0110 02:49:14.492694  221603 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:49:14.516046  221603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
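The generated kubeadm configuration shown earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file) is written to /var/tmp/minikube/kubeadm.yaml.new here. As a sketch, and assuming a matching kubeadm binary sits alongside the kubelet under /var/lib/minikube/binaries/v1.35.0 and that recent kubeadm releases accept the validate subcommand, such a file can be sanity-checked before use:

	# Illustrative only: ask kubeadm to validate the generated multi-document config.
	docker exec default-k8s-diff-port-403885 \
	  sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new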
	I0110 02:49:14.543754  221603 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:49:14.551855  221603 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:49:14.566006  221603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:49:14.793285  221603 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:49:14.945140  221603 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885 for IP: 192.168.85.2
	I0110 02:49:14.945157  221603 certs.go:195] generating shared ca certs ...
	I0110 02:49:14.945172  221603 certs.go:227] acquiring lock for ca certs: {Name:mk60bb8578732e3e8efcf11c7d77fb48585828c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:14.945297  221603 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key
	I0110 02:49:14.945337  221603 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key
	I0110 02:49:14.945344  221603 certs.go:257] generating profile certs ...
	I0110 02:49:14.945411  221603 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/client.key
	I0110 02:49:14.945421  221603 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/client.crt with IP's: []
	I0110 02:49:15.480145  221603 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/client.crt ...
	I0110 02:49:15.480214  221603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/client.crt: {Name:mk9eec3843e409d483628e8d35aaec2dfee13520 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:15.480434  221603 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/client.key ...
	I0110 02:49:15.480472  221603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/client.key: {Name:mk43466de83aa497a814a33e65c7b55afca902af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:15.480607  221603 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/apiserver.key.f53c6d08
	I0110 02:49:15.480649  221603 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/apiserver.crt.f53c6d08 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0110 02:49:15.886160  221603 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/apiserver.crt.f53c6d08 ...
	I0110 02:49:15.886189  221603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/apiserver.crt.f53c6d08: {Name:mka53329c154574525876158ccdf72b9bf97da2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:15.886356  221603 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/apiserver.key.f53c6d08 ...
	I0110 02:49:15.886364  221603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/apiserver.key.f53c6d08: {Name:mk6ae38cc7c5342c020a34cd2efefb468f08cf60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:15.886434  221603 certs.go:382] copying /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/apiserver.crt.f53c6d08 -> /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/apiserver.crt
	I0110 02:49:15.886512  221603 certs.go:386] copying /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/apiserver.key.f53c6d08 -> /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/apiserver.key
	I0110 02:49:15.886564  221603 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/proxy-client.key
	I0110 02:49:15.886578  221603 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/proxy-client.crt with IP's: []
	I0110 02:49:16.202015  221603 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/proxy-client.crt ...
	I0110 02:49:16.202046  221603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/proxy-client.crt: {Name:mk129b2947265ea2dbd8f82f369645e842809a54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:16.202271  221603 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/proxy-client.key ...
	I0110 02:49:16.202287  221603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/proxy-client.key: {Name:mkaf4792b23a3e8ea01344abca5008f18f38d0b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:16.202497  221603 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem (1338 bytes)
	W0110 02:49:16.202543  221603 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168_empty.pem, impossibly tiny 0 bytes
	I0110 02:49:16.202556  221603 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:49:16.202582  221603 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:49:16.202612  221603 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:49:16.202638  221603 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem (1679 bytes)
	I0110 02:49:16.202685  221603 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:49:16.203268  221603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:49:16.221726  221603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:49:16.239838  221603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:49:16.257683  221603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 02:49:16.275810  221603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0110 02:49:16.295721  221603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:49:16.319198  221603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:49:16.339250  221603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 02:49:16.359442  221603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I0110 02:49:16.378598  221603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:49:16.395682  221603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I0110 02:49:16.413389  221603 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:49:16.443338  221603 ssh_runner.go:195] Run: openssl version
	I0110 02:49:16.450881  221603 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I0110 02:49:16.463218  221603 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I0110 02:49:16.476316  221603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I0110 02:49:16.480733  221603 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:58 /usr/share/ca-certificates/4168.pem
	I0110 02:49:16.480848  221603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I0110 02:49:16.526658  221603 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:49:16.534630  221603 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4168.pem /etc/ssl/certs/51391683.0
	I0110 02:49:16.542760  221603 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I0110 02:49:16.550564  221603 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I0110 02:49:16.560513  221603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I0110 02:49:16.564648  221603 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:58 /usr/share/ca-certificates/41682.pem
	I0110 02:49:16.564710  221603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I0110 02:49:16.607926  221603 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:49:16.615264  221603 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41682.pem /etc/ssl/certs/3ec20f2e.0
	I0110 02:49:16.622453  221603 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:49:16.629620  221603 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:49:16.640201  221603 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:49:16.644390  221603 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:49:16.644455  221603 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:49:16.687173  221603 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:49:16.694523  221603 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 02:49:16.701946  221603 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:49:16.705248  221603 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 02:49:16.705301  221603 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-403885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-403885 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:49:16.705374  221603 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:49:16.705443  221603 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:49:16.731744  221603 cri.go:96] found id: ""
	I0110 02:49:16.731835  221603 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:49:16.740285  221603 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 02:49:16.758228  221603 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:49:16.758296  221603 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:49:16.769824  221603 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:49:16.769850  221603 kubeadm.go:158] found existing configuration files:
	
	I0110 02:49:16.769900  221603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0110 02:49:16.780529  221603 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:49:16.780595  221603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:49:16.787767  221603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0110 02:49:16.795091  221603 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:49:16.795156  221603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:49:16.802656  221603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0110 02:49:16.811506  221603 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:49:16.811570  221603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:49:16.818923  221603 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0110 02:49:16.828210  221603 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:49:16.828300  221603 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:49:16.835691  221603 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:49:16.886011  221603 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:49:16.886420  221603 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:49:17.000764  221603 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:49:17.000836  221603 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:49:17.000875  221603 kubeadm.go:319] OS: Linux
	I0110 02:49:17.000922  221603 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:49:17.000971  221603 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:49:17.001019  221603 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:49:17.001077  221603 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:49:17.001125  221603 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:49:17.001172  221603 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:49:17.001214  221603 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:49:17.001260  221603 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:49:17.001305  221603 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:49:17.085342  221603 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:49:17.085457  221603 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:49:17.085561  221603 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:49:17.100559  221603 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 02:49:17.105955  221603 out.go:252]   - Generating certificates and keys ...
	I0110 02:49:17.106040  221603 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:49:17.106117  221603 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:49:14.285805  223339 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Running}}
	I0110 02:49:14.318138  223339 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:14.348546  223339 cli_runner.go:164] Run: docker exec newest-cni-733680 stat /var/lib/dpkg/alternatives/iptables
	I0110 02:49:14.415715  223339 oci.go:144] the created container "newest-cni-733680" has a running status.
	I0110 02:49:14.415751  223339 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa...
	I0110 02:49:14.919783  223339 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 02:49:14.972805  223339 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:15.053283  223339 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 02:49:15.053306  223339 kic_runner.go:114] Args: [docker exec --privileged newest-cni-733680 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 02:49:15.175126  223339 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:15.210309  223339 machine.go:94] provisionDockerMachine start ...
	I0110 02:49:15.210403  223339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:15.271307  223339 main.go:144] libmachine: Using SSH client type: native
	I0110 02:49:15.271634  223339 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I0110 02:49:15.271642  223339 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:49:15.567378  223339 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-733680
	
	I0110 02:49:15.567399  223339 ubuntu.go:182] provisioning hostname "newest-cni-733680"
	I0110 02:49:15.567465  223339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:15.594996  223339 main.go:144] libmachine: Using SSH client type: native
	I0110 02:49:15.595303  223339 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I0110 02:49:15.595318  223339 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-733680 && echo "newest-cni-733680" | sudo tee /etc/hostname
	I0110 02:49:15.829196  223339 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-733680
	
	I0110 02:49:15.829291  223339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:15.874342  223339 main.go:144] libmachine: Using SSH client type: native
	I0110 02:49:15.874646  223339 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I0110 02:49:15.874669  223339 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-733680' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-733680/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-733680' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:49:16.033410  223339 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:49:16.033436  223339 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2353/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2353/.minikube}
	I0110 02:49:16.033454  223339 ubuntu.go:190] setting up certificates
	I0110 02:49:16.033465  223339 provision.go:84] configureAuth start
	I0110 02:49:16.033524  223339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-733680
	I0110 02:49:16.057368  223339 provision.go:143] copyHostCerts
	I0110 02:49:16.057436  223339 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem, removing ...
	I0110 02:49:16.057445  223339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem
	I0110 02:49:16.057522  223339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem (1679 bytes)
	I0110 02:49:16.057610  223339 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem, removing ...
	I0110 02:49:16.057615  223339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem
	I0110 02:49:16.057640  223339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem (1082 bytes)
	I0110 02:49:16.057692  223339 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem, removing ...
	I0110 02:49:16.057697  223339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem
	I0110 02:49:16.057719  223339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem (1123 bytes)
	I0110 02:49:16.057765  223339 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem org=jenkins.newest-cni-733680 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-733680]
	I0110 02:49:16.159042  223339 provision.go:177] copyRemoteCerts
	I0110 02:49:16.159132  223339 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:49:16.159178  223339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:16.178494  223339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:16.285237  223339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:49:16.305254  223339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 02:49:16.325126  223339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 02:49:16.346108  223339 provision.go:87] duration metric: took 312.622635ms to configureAuth
	I0110 02:49:16.346132  223339 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:49:16.346336  223339 config.go:182] Loaded profile config "newest-cni-733680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:49:16.346451  223339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:16.366994  223339 main.go:144] libmachine: Using SSH client type: native
	I0110 02:49:16.367292  223339 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I0110 02:49:16.367308  223339 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:49:16.746489  223339 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:49:16.746583  223339 machine.go:97] duration metric: took 1.536254614s to provisionDockerMachine
	I0110 02:49:16.746608  223339 client.go:176] duration metric: took 7.292713903s to LocalClient.Create
	I0110 02:49:16.746660  223339 start.go:167] duration metric: took 7.292798283s to libmachine.API.Create "newest-cni-733680"
	I0110 02:49:16.746684  223339 start.go:293] postStartSetup for "newest-cni-733680" (driver="docker")
	I0110 02:49:16.746726  223339 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:49:16.746819  223339 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:49:16.746894  223339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:16.768629  223339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:16.884814  223339 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:49:16.888722  223339 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:49:16.888750  223339 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:49:16.888761  223339 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/addons for local assets ...
	I0110 02:49:16.888810  223339 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/files for local assets ...
	I0110 02:49:16.888940  223339 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> 41682.pem in /etc/ssl/certs
	I0110 02:49:16.889040  223339 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:49:16.898995  223339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:49:16.920848  223339 start.go:296] duration metric: took 174.119509ms for postStartSetup
	I0110 02:49:16.921248  223339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-733680
	I0110 02:49:16.942841  223339 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/config.json ...
	I0110 02:49:16.943126  223339 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:49:16.943182  223339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:16.962916  223339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:17.070674  223339 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:49:17.075697  223339 start.go:128] duration metric: took 7.6256351s to createHost
	I0110 02:49:17.075726  223339 start.go:83] releasing machines lock for "newest-cni-733680", held for 7.625760463s
	I0110 02:49:17.075809  223339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-733680
	I0110 02:49:17.095067  223339 ssh_runner.go:195] Run: cat /version.json
	I0110 02:49:17.095114  223339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:17.095656  223339 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:49:17.095729  223339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:17.129513  223339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:17.137204  223339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:17.352998  223339 ssh_runner.go:195] Run: systemctl --version
	I0110 02:49:17.359730  223339 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:49:17.397102  223339 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:49:17.401841  223339 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:49:17.401916  223339 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:49:17.431286  223339 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 02:49:17.431348  223339 start.go:496] detecting cgroup driver to use...
	I0110 02:49:17.431394  223339 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 02:49:17.431456  223339 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:49:17.453885  223339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:49:17.467789  223339 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:49:17.467938  223339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:49:17.485991  223339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:49:17.505671  223339 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:49:17.650086  223339 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:49:17.801168  223339 docker.go:234] disabling docker service ...
	I0110 02:49:17.801261  223339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:49:17.823901  223339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:49:17.838046  223339 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:49:18.007039  223339 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:49:18.162790  223339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:49:18.176951  223339 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:49:18.190770  223339 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:49:18.190863  223339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:18.199829  223339 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 02:49:18.199926  223339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:18.208986  223339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:18.217512  223339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:18.227539  223339 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:49:18.235590  223339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:18.244340  223339 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:18.257366  223339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:18.266441  223339 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:49:18.274640  223339 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:49:18.281961  223339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:49:18.414828  223339 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 02:49:18.595751  223339 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:49:18.595909  223339 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:49:18.600330  223339 start.go:574] Will wait 60s for crictl version
	I0110 02:49:18.600434  223339 ssh_runner.go:195] Run: which crictl
	I0110 02:49:18.604536  223339 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:49:18.639528  223339 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:49:18.639669  223339 ssh_runner.go:195] Run: crio --version
	I0110 02:49:18.673657  223339 ssh_runner.go:195] Run: crio --version
	I0110 02:49:18.711847  223339 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:49:18.714602  223339 cli_runner.go:164] Run: docker network inspect newest-cni-733680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:49:18.757764  223339 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 02:49:18.762048  223339 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:49:18.777389  223339 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0110 02:49:18.780131  223339 kubeadm.go:884] updating cluster {Name:newest-cni-733680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-733680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:49:18.780267  223339 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:49:18.780330  223339 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:49:18.841910  223339 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:49:18.841932  223339 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:49:18.841985  223339 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:49:18.871756  223339 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:49:18.871777  223339 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:49:18.871785  223339 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 02:49:18.871909  223339 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-733680 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-733680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:49:18.872006  223339 ssh_runner.go:195] Run: crio config
	I0110 02:49:18.948397  223339 cni.go:84] Creating CNI manager for ""
	I0110 02:49:18.948464  223339 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:49:18.948494  223339 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I0110 02:49:18.948540  223339 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-733680 NodeName:newest-cni-733680 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:49:18.948696  223339 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-733680"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:49:18.948794  223339 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:49:18.957388  223339 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:49:18.957510  223339 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:49:18.970015  223339 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0110 02:49:18.983586  223339 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:49:18.997355  223339 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I0110 02:49:19.011984  223339 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:49:19.015855  223339 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:49:19.026887  223339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:49:17.427264  221603 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 02:49:17.485262  221603 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 02:49:17.609509  221603 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 02:49:17.786870  221603 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 02:49:18.113578  221603 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 02:49:18.114187  221603 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-403885 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 02:49:18.297980  221603 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 02:49:18.298526  221603 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-403885 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0110 02:49:18.393610  221603 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 02:49:18.518511  221603 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 02:49:18.752323  221603 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 02:49:18.752391  221603 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:49:18.850043  221603 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:49:19.043274  221603 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:49:19.135253  221603 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:49:19.405541  221603 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:49:20.088036  221603 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:49:20.089205  221603 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:49:20.092468  221603 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:49:19.161794  223339 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:49:19.181114  223339 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680 for IP: 192.168.76.2
	I0110 02:49:19.181185  223339 certs.go:195] generating shared ca certs ...
	I0110 02:49:19.181216  223339 certs.go:227] acquiring lock for ca certs: {Name:mk60bb8578732e3e8efcf11c7d77fb48585828c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:19.181412  223339 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key
	I0110 02:49:19.181492  223339 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key
	I0110 02:49:19.181515  223339 certs.go:257] generating profile certs ...
	I0110 02:49:19.181597  223339 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/client.key
	I0110 02:49:19.181647  223339 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/client.crt with IP's: []
	I0110 02:49:19.573364  223339 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/client.crt ...
	I0110 02:49:19.573438  223339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/client.crt: {Name:mkd419709cd60f33f78b35f2115a9383be16dc60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:19.573672  223339 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/client.key ...
	I0110 02:49:19.573705  223339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/client.key: {Name:mka97cab9db2536444a39a69b8296aea8d389a68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:19.573843  223339 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/apiserver.key.aabe30f3
	I0110 02:49:19.573884  223339 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/apiserver.crt.aabe30f3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0110 02:49:19.719335  223339 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/apiserver.crt.aabe30f3 ...
	I0110 02:49:19.719362  223339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/apiserver.crt.aabe30f3: {Name:mkd8cbc7d48dd43e625c213fa307fed7973b316a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:19.719522  223339 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/apiserver.key.aabe30f3 ...
	I0110 02:49:19.719530  223339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/apiserver.key.aabe30f3: {Name:mk2a2448252ca9719083312a669eb15a525c8555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:19.719608  223339 certs.go:382] copying /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/apiserver.crt.aabe30f3 -> /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/apiserver.crt
	I0110 02:49:19.719683  223339 certs.go:386] copying /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/apiserver.key.aabe30f3 -> /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/apiserver.key
	I0110 02:49:19.719741  223339 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/proxy-client.key
	I0110 02:49:19.719754  223339 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/proxy-client.crt with IP's: []
	I0110 02:49:19.955702  223339 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/proxy-client.crt ...
	I0110 02:49:19.955776  223339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/proxy-client.crt: {Name:mkd4ca6150dd2cda10424b2f5f28be18b25059d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:19.955979  223339 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/proxy-client.key ...
	I0110 02:49:19.956022  223339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/proxy-client.key: {Name:mk8e53f5ce75da9a2481acf3039fa282f5dc74cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:19.956244  223339 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem (1338 bytes)
	W0110 02:49:19.956326  223339 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168_empty.pem, impossibly tiny 0 bytes
	I0110 02:49:19.956352  223339 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:49:19.956430  223339 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:49:19.956492  223339 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:49:19.956539  223339 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem (1679 bytes)
	I0110 02:49:19.956614  223339 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:49:19.957232  223339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:49:20.018195  223339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:49:20.038954  223339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:49:20.057006  223339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 02:49:20.074270  223339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0110 02:49:20.097596  223339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 02:49:20.118191  223339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:49:20.139333  223339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 02:49:20.159930  223339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I0110 02:49:20.196886  223339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:49:20.223211  223339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I0110 02:49:20.241269  223339 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:49:20.253870  223339 ssh_runner.go:195] Run: openssl version
	I0110 02:49:20.260440  223339 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I0110 02:49:20.267640  223339 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I0110 02:49:20.275131  223339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I0110 02:49:20.279311  223339 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:58 /usr/share/ca-certificates/41682.pem
	I0110 02:49:20.279428  223339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I0110 02:49:20.324095  223339 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:49:20.331218  223339 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41682.pem /etc/ssl/certs/3ec20f2e.0
	I0110 02:49:20.338150  223339 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:49:20.345196  223339 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:49:20.352137  223339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:49:20.356172  223339 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:49:20.356280  223339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:49:20.401787  223339 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:49:20.409116  223339 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 02:49:20.416555  223339 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I0110 02:49:20.423435  223339 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I0110 02:49:20.430782  223339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I0110 02:49:20.434640  223339 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:58 /usr/share/ca-certificates/4168.pem
	I0110 02:49:20.434744  223339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I0110 02:49:20.475686  223339 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:49:20.482641  223339 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4168.pem /etc/ssl/certs/51391683.0
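
	The three certificate installs above follow the standard OpenSSL trust-store layout: each PEM is placed under /usr/share/ca-certificates and then exposed in /etc/ssl/certs under its subject hash (b5213941.0, 3ec20f2e.0, 51391683.0 in this run). A minimal manual sketch for the minikube CA, using only commands already shown in the log (the hash value is whatever openssl prints):

	# compute the subject hash that names the trust-store symlink
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# link the CA under its own name, then under its hash, in /etc/ssl/certs
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
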
	I0110 02:49:20.489260  223339 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:49:20.493092  223339 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 02:49:20.493188  223339 kubeadm.go:401] StartCluster: {Name:newest-cni-733680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-733680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:49:20.493311  223339 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:49:20.493385  223339 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:49:20.527778  223339 cri.go:96] found id: ""
	I0110 02:49:20.527895  223339 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:49:20.536389  223339 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 02:49:20.543618  223339 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:49:20.543756  223339 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:49:20.554193  223339 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:49:20.554216  223339 kubeadm.go:158] found existing configuration files:
	
	I0110 02:49:20.554275  223339 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:49:20.562294  223339 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:49:20.562358  223339 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:49:20.569350  223339 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:49:20.580538  223339 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:49:20.580613  223339 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:49:20.591973  223339 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:49:20.600680  223339 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:49:20.600740  223339 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:49:20.610918  223339 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:49:20.618483  223339 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:49:20.618597  223339 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
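
	The stale-config check above reduces to: for each kubeconfig under /etc/kubernetes, keep it only if it already references the expected control-plane endpoint, otherwise delete it so kubeadm can regenerate it. A compact sketch of the same logic, with the endpoint and file names taken from the log:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done
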
	I0110 02:49:20.625601  223339 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:49:20.677134  223339 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:49:20.677632  223339 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:49:20.799298  223339 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:49:20.799417  223339 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:49:20.799489  223339 kubeadm.go:319] OS: Linux
	I0110 02:49:20.799563  223339 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:49:20.799634  223339 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:49:20.799709  223339 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:49:20.799782  223339 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:49:20.799874  223339 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:49:20.799951  223339 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:49:20.800024  223339 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:49:20.800097  223339 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:49:20.800169  223339 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:49:20.880297  223339 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:49:20.880478  223339 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:49:20.880615  223339 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:49:20.892509  223339 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 02:49:20.096688  221603 out.go:252]   - Booting up control plane ...
	I0110 02:49:20.096799  221603 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:49:20.096884  221603 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:49:20.098516  221603 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:49:20.120279  221603 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:49:20.120584  221603 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:49:20.133301  221603 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:49:20.133673  221603 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:49:20.133960  221603 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:49:20.288478  221603 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:49:20.288592  221603 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:49:21.288134  221603 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000775553s
	I0110 02:49:21.288941  221603 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0110 02:49:21.289242  221603 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I0110 02:49:21.289509  221603 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0110 02:49:21.290183  221603 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0110 02:49:20.898514  223339 out.go:252]   - Generating certificates and keys ...
	I0110 02:49:20.898668  223339 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:49:20.898768  223339 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:49:21.037690  223339 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 02:49:21.841742  223339 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 02:49:22.028482  223339 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 02:49:22.152165  223339 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 02:49:22.399495  223339 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 02:49:22.400140  223339 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-733680] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 02:49:22.716962  223339 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 02:49:22.717502  223339 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-733680] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 02:49:23.009231  223339 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 02:49:23.340725  223339 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 02:49:23.696301  223339 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 02:49:23.696807  223339 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:49:23.864757  223339 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:49:23.961852  223339 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:49:24.194320  223339 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:49:24.315989  223339 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:49:24.602970  223339 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:49:24.604049  223339 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:49:24.606985  223339 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 02:49:23.307296  221603 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.016743449s
	I0110 02:49:25.836384  221603 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.545597628s
	I0110 02:49:27.793673  221603 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.50377891s
	I0110 02:49:27.861025  221603 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0110 02:49:27.878040  221603 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0110 02:49:27.897581  221603 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0110 02:49:27.898056  221603 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-403885 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0110 02:49:27.911202  221603 kubeadm.go:319] [bootstrap-token] Using token: uljo94.erc5ev1rmjo9pe3o
	I0110 02:49:24.610248  223339 out.go:252]   - Booting up control plane ...
	I0110 02:49:24.610344  223339 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:49:24.618427  223339 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:49:24.619416  223339 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:49:24.649810  223339 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:49:24.650124  223339 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:49:24.663142  223339 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:49:24.663430  223339 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:49:24.663644  223339 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:49:24.830404  223339 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:49:24.830525  223339 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:49:25.832148  223339 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00157409s
	I0110 02:49:25.833750  223339 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0110 02:49:25.833966  223339 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0110 02:49:25.834060  223339 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0110 02:49:25.834140  223339 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0110 02:49:27.864943  223339 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.029598136s
	I0110 02:49:27.914120  221603 out.go:252]   - Configuring RBAC rules ...
	I0110 02:49:27.914244  221603 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0110 02:49:27.919862  221603 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0110 02:49:27.930528  221603 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0110 02:49:27.942064  221603 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0110 02:49:27.947252  221603 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0110 02:49:27.952083  221603 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0110 02:49:28.201778  221603 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0110 02:49:28.669989  221603 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0110 02:49:29.201502  221603 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0110 02:49:29.202571  221603 kubeadm.go:319] 
	I0110 02:49:29.202646  221603 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0110 02:49:29.202651  221603 kubeadm.go:319] 
	I0110 02:49:29.202723  221603 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0110 02:49:29.202727  221603 kubeadm.go:319] 
	I0110 02:49:29.202750  221603 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0110 02:49:29.202805  221603 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0110 02:49:29.202852  221603 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0110 02:49:29.202857  221603 kubeadm.go:319] 
	I0110 02:49:29.202907  221603 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0110 02:49:29.202911  221603 kubeadm.go:319] 
	I0110 02:49:29.202955  221603 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0110 02:49:29.202959  221603 kubeadm.go:319] 
	I0110 02:49:29.203007  221603 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0110 02:49:29.203078  221603 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0110 02:49:29.203142  221603 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0110 02:49:29.203145  221603 kubeadm.go:319] 
	I0110 02:49:29.203225  221603 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0110 02:49:29.203299  221603 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0110 02:49:29.203303  221603 kubeadm.go:319] 
	I0110 02:49:29.203382  221603 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token uljo94.erc5ev1rmjo9pe3o \
	I0110 02:49:29.203479  221603 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e531c0ad449fbe8263f118458e677222d5af5abf33ad8c77cbc1152855f23f5 \
	I0110 02:49:29.203497  221603 kubeadm.go:319] 	--control-plane 
	I0110 02:49:29.203501  221603 kubeadm.go:319] 
	I0110 02:49:29.203581  221603 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0110 02:49:29.203584  221603 kubeadm.go:319] 
	I0110 02:49:29.203661  221603 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token uljo94.erc5ev1rmjo9pe3o \
	I0110 02:49:29.203758  221603 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e531c0ad449fbe8263f118458e677222d5af5abf33ad8c77cbc1152855f23f5 
	I0110 02:49:29.208039  221603 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:49:29.208427  221603 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:49:29.208529  221603 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:49:29.208542  221603 cni.go:84] Creating CNI manager for ""
	I0110 02:49:29.208550  221603 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:49:29.211599  221603 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0110 02:49:30.032680  223339 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.197796714s
	I0110 02:49:31.837306  223339 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.002148621s
	I0110 02:49:31.875473  223339 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0110 02:49:31.898471  223339 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0110 02:49:31.910489  223339 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0110 02:49:31.910721  223339 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-733680 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0110 02:49:31.922801  223339 kubeadm.go:319] [bootstrap-token] Using token: 5iifn1.ecz6bs887w56mh1z
	I0110 02:49:29.214396  221603 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0110 02:49:29.218778  221603 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0110 02:49:29.218794  221603 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0110 02:49:29.235588  221603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0110 02:49:29.639820  221603 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0110 02:49:29.639952  221603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:49:29.640061  221603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-403885 minikube.k8s.io/updated_at=2026_01_10T02_49_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510 minikube.k8s.io/name=default-k8s-diff-port-403885 minikube.k8s.io/primary=true
	I0110 02:49:30.235452  221603 ops.go:34] apiserver oom_adj: -16
	I0110 02:49:30.235556  221603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:49:30.736239  221603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:49:31.236250  221603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:49:31.735757  221603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:49:32.235676  221603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:49:31.925738  223339 out.go:252]   - Configuring RBAC rules ...
	I0110 02:49:31.925856  223339 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0110 02:49:31.932727  223339 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0110 02:49:31.947409  223339 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0110 02:49:31.952509  223339 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0110 02:49:31.956838  223339 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0110 02:49:31.963399  223339 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0110 02:49:32.248634  223339 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0110 02:49:32.696557  223339 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0110 02:49:33.244169  223339 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0110 02:49:33.245301  223339 kubeadm.go:319] 
	I0110 02:49:33.245373  223339 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0110 02:49:33.245378  223339 kubeadm.go:319] 
	I0110 02:49:33.245459  223339 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0110 02:49:33.245464  223339 kubeadm.go:319] 
	I0110 02:49:33.245489  223339 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0110 02:49:33.245547  223339 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0110 02:49:33.245607  223339 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0110 02:49:33.245611  223339 kubeadm.go:319] 
	I0110 02:49:33.245675  223339 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0110 02:49:33.245680  223339 kubeadm.go:319] 
	I0110 02:49:33.245727  223339 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0110 02:49:33.245731  223339 kubeadm.go:319] 
	I0110 02:49:33.245782  223339 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0110 02:49:33.245857  223339 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0110 02:49:33.245925  223339 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0110 02:49:33.245929  223339 kubeadm.go:319] 
	I0110 02:49:33.246014  223339 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0110 02:49:33.246090  223339 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0110 02:49:33.246094  223339 kubeadm.go:319] 
	I0110 02:49:33.246185  223339 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 5iifn1.ecz6bs887w56mh1z \
	I0110 02:49:33.246288  223339 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e531c0ad449fbe8263f118458e677222d5af5abf33ad8c77cbc1152855f23f5 \
	I0110 02:49:33.246311  223339 kubeadm.go:319] 	--control-plane 
	I0110 02:49:33.246315  223339 kubeadm.go:319] 
	I0110 02:49:33.246406  223339 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0110 02:49:33.246410  223339 kubeadm.go:319] 
	I0110 02:49:33.246495  223339 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 5iifn1.ecz6bs887w56mh1z \
	I0110 02:49:33.246603  223339 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e531c0ad449fbe8263f118458e677222d5af5abf33ad8c77cbc1152855f23f5 
	I0110 02:49:33.249113  223339 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:49:33.249634  223339 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:49:33.249762  223339 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:49:33.249777  223339 cni.go:84] Creating CNI manager for ""
	I0110 02:49:33.249785  223339 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:49:33.255351  223339 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0110 02:49:32.736433  221603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:49:33.235988  221603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:49:33.736028  221603 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:49:34.017008  221603 kubeadm.go:1114] duration metric: took 4.377110672s to wait for elevateKubeSystemPrivileges
	I0110 02:49:34.017039  221603 kubeadm.go:403] duration metric: took 17.311742716s to StartCluster
	I0110 02:49:34.017057  221603 settings.go:142] acquiring lock: {Name:mkf19ab3097ad600bdb33c5b47b20062ddaabac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:34.017118  221603 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:49:34.017752  221603 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:34.017978  221603 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:49:34.018065  221603 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0110 02:49:34.018314  221603 config.go:182] Loaded profile config "default-k8s-diff-port-403885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:49:34.018353  221603 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:49:34.018413  221603 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-403885"
	I0110 02:49:34.018427  221603 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-403885"
	I0110 02:49:34.018448  221603 host.go:66] Checking if "default-k8s-diff-port-403885" exists ...
	I0110 02:49:34.018916  221603 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-403885 --format={{.State.Status}}
	I0110 02:49:34.019486  221603 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-403885"
	I0110 02:49:34.019504  221603 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-403885"
	I0110 02:49:34.019767  221603 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-403885 --format={{.State.Status}}
	I0110 02:49:34.025312  221603 out.go:179] * Verifying Kubernetes components...
	I0110 02:49:34.031783  221603 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:49:34.071621  221603 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-403885"
	I0110 02:49:34.071659  221603 host.go:66] Checking if "default-k8s-diff-port-403885" exists ...
	I0110 02:49:34.072138  221603 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-403885 --format={{.State.Status}}
	I0110 02:49:34.077509  221603 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:49:33.258179  223339 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0110 02:49:33.262427  223339 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0110 02:49:33.262446  223339 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0110 02:49:33.289983  223339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0110 02:49:33.728421  223339 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0110 02:49:33.728563  223339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:49:33.728644  223339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-733680 minikube.k8s.io/updated_at=2026_01_10T02_49_33_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510 minikube.k8s.io/name=newest-cni-733680 minikube.k8s.io/primary=true
	I0110 02:49:34.080644  221603 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:49:34.080665  221603 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:49:34.080734  221603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:49:34.109696  221603 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:49:34.109716  221603 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:49:34.109775  221603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:49:34.123889  221603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/default-k8s-diff-port-403885/id_rsa Username:docker}
	I0110 02:49:34.152400  221603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/default-k8s-diff-port-403885/id_rsa Username:docker}
	I0110 02:49:34.552105  221603 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0110 02:49:34.552275  221603 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:49:34.556052  221603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:49:34.574586  221603 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:49:35.670447  221603 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.118128083s)
	I0110 02:49:35.670526  221603 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.118353793s)
	I0110 02:49:35.670689  221603 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0110 02:49:35.670589  221603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.114479803s)
	I0110 02:49:35.670625  221603 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.095978091s)
	I0110 02:49:35.672842  221603 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-403885" to be "Ready" ...
	I0110 02:49:35.715883  221603 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0110 02:49:35.717023  221603 addons.go:530] duration metric: took 1.698668778s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0110 02:49:36.176268  221603 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-403885" context rescaled to 1 replicas
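
	The sed pipeline above injects a hosts block mapping host.minikube.internal to the gateway IP into the coredns ConfigMap before replacing it. Whether the record landed can be checked on the node with the same kubectl binary and kubeconfig the log already uses:

	sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o yaml
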
	I0110 02:49:34.251235  223339 ops.go:34] apiserver oom_adj: -16
	I0110 02:49:34.251349  223339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:49:34.751465  223339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:49:35.252055  223339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:49:35.751618  223339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:49:36.252348  223339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:49:36.751643  223339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:49:37.251986  223339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:49:37.752341  223339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:49:37.881109  223339 kubeadm.go:1114] duration metric: took 4.152605546s to wait for elevateKubeSystemPrivileges
	I0110 02:49:37.881142  223339 kubeadm.go:403] duration metric: took 17.387958094s to StartCluster
	I0110 02:49:37.881161  223339 settings.go:142] acquiring lock: {Name:mkf19ab3097ad600bdb33c5b47b20062ddaabac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:37.881226  223339 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:49:37.882315  223339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:37.882581  223339 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:49:37.882701  223339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0110 02:49:37.882971  223339 config.go:182] Loaded profile config "newest-cni-733680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:49:37.883012  223339 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:49:37.883077  223339 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-733680"
	I0110 02:49:37.883092  223339 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-733680"
	I0110 02:49:37.883114  223339 host.go:66] Checking if "newest-cni-733680" exists ...
	I0110 02:49:37.883899  223339 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:37.884511  223339 addons.go:70] Setting default-storageclass=true in profile "newest-cni-733680"
	I0110 02:49:37.884544  223339 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-733680"
	I0110 02:49:37.884848  223339 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:37.886493  223339 out.go:179] * Verifying Kubernetes components...
	I0110 02:49:37.889423  223339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:49:37.925305  223339 addons.go:239] Setting addon default-storageclass=true in "newest-cni-733680"
	I0110 02:49:37.925347  223339 host.go:66] Checking if "newest-cni-733680" exists ...
	I0110 02:49:37.931228  223339 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:37.937886  223339 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:49:37.940842  223339 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:49:37.940869  223339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:49:37.940946  223339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:37.971952  223339 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:49:37.971973  223339 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:49:37.972029  223339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:37.988954  223339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:38.023976  223339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:38.263029  223339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:49:38.400878  223339 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:49:38.400999  223339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0110 02:49:38.404331  223339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:49:39.253014  223339 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0110 02:49:39.255077  223339 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:49:39.255134  223339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:49:39.309272  223339 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0110 02:49:39.312584  223339 addons.go:530] duration metric: took 1.429562779s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0110 02:49:39.317866  223339 api_server.go:72] duration metric: took 1.435251129s to wait for apiserver process to appear ...
	I0110 02:49:39.317892  223339 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:49:39.317926  223339 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:49:39.337752  223339 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 02:49:39.347203  223339 api_server.go:141] control plane version: v1.35.0
	I0110 02:49:39.347234  223339 api_server.go:131] duration metric: took 29.335886ms to wait for apiserver health ...
	I0110 02:49:39.347244  223339 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:49:39.359082  223339 system_pods.go:59] 9 kube-system pods found
	I0110 02:49:39.359121  223339 system_pods.go:61] "coredns-7d764666f9-5kvwt" [02024e77-f46d-404c-95f9-dbf6715ecbb2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 02:49:39.359131  223339 system_pods.go:61] "coredns-7d764666f9-7djps" [5b40a1e2-6d92-4e33-8a94-19e45bc18937] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 02:49:39.359137  223339 system_pods.go:61] "etcd-newest-cni-733680" [15903ffb-9c75-402b-aaf1-2ea433e993a1] Running
	I0110 02:49:39.359151  223339 system_pods.go:61] "kindnet-bnwfz" [49fa87e1-f1d9-4315-918f-b079caade618] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 02:49:39.359165  223339 system_pods.go:61] "kube-apiserver-newest-cni-733680" [7daa7cb6-3a2f-4d11-aadd-c7aab970ff4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:49:39.359177  223339 system_pods.go:61] "kube-controller-manager-newest-cni-733680" [f5a37a0a-47e7-41e5-a01f-16efb8c43166] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:49:39.359190  223339 system_pods.go:61] "kube-proxy-mnr64" [0f5cc6b8-6364-4711-a838-fb70b057c4ef] Running
	I0110 02:49:39.359196  223339 system_pods.go:61] "kube-scheduler-newest-cni-733680" [d5f0e1ba-9a06-4172-a5ec-140a553a47ff] Running
	I0110 02:49:39.359202  223339 system_pods.go:61] "storage-provisioner" [1a9287f6-2918-4098-8444-2f1c2c4dda71] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 02:49:39.359212  223339 system_pods.go:74] duration metric: took 11.961931ms to wait for pod list to return data ...
	I0110 02:49:39.359232  223339 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:49:39.379177  223339 default_sa.go:45] found service account: "default"
	I0110 02:49:39.379205  223339 default_sa.go:55] duration metric: took 19.966864ms for default service account to be created ...
	I0110 02:49:39.379220  223339 kubeadm.go:587] duration metric: took 1.496609437s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 02:49:39.379236  223339 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:49:39.395077  223339 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 02:49:39.395108  223339 node_conditions.go:123] node cpu capacity is 2
	I0110 02:49:39.395123  223339 node_conditions.go:105] duration metric: took 15.880875ms to run NodePressure ...
	I0110 02:49:39.395136  223339 start.go:242] waiting for startup goroutines ...
	I0110 02:49:39.757400  223339 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-733680" context rescaled to 1 replicas
	I0110 02:49:39.757436  223339 start.go:247] waiting for cluster config update ...
	I0110 02:49:39.757448  223339 start.go:256] writing updated cluster config ...
	I0110 02:49:39.757721  223339 ssh_runner.go:195] Run: rm -f paused
	I0110 02:49:39.854393  223339 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 02:49:39.857729  223339 out.go:203] 
	W0110 02:49:39.860854  223339 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 02:49:39.863858  223339 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:49:39.866873  223339 out.go:179] * Done! kubectl is now configured to use "newest-cni-733680" cluster and "default" namespace by default
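
	Given the minor-skew warning above, the hinted way to get a matching client is to go through the minikube wrapper; with the test binary and profile used in this run, the suggested invocation would look like the following (assembled from the log's hint, not itself part of the test run):

	out/minikube-linux-arm64 -p newest-cni-733680 kubectl -- get pods -A
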
	
	
	==> CRI-O <==
	Jan 10 02:49:38 newest-cni-733680 crio[843]: time="2026-01-10T02:49:38.611467682Z" level=info msg="Ran pod sandbox a2689ca4f49407727b0372863d2f21986e13b1474db34d547f68104213b77432 with infra container: kube-system/kindnet-bnwfz/POD" id=bdd27be0-3908-42b8-8561-e5b3f5ff0d19 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:49:38 newest-cni-733680 crio[843]: time="2026-01-10T02:49:38.612081771Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=acf654e5-5482-4441-86f7-d512654972b8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:49:38 newest-cni-733680 crio[843]: time="2026-01-10T02:49:38.613143962Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=d644de39-7148-4dd2-a312-9d2d1a44eae0 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:49:38 newest-cni-733680 crio[843]: time="2026-01-10T02:49:38.613597103Z" level=info msg="Image docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 not found" id=d644de39-7148-4dd2-a312-9d2d1a44eae0 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:49:38 newest-cni-733680 crio[843]: time="2026-01-10T02:49:38.613815756Z" level=info msg="Neither image nor artfiact docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 found" id=d644de39-7148-4dd2-a312-9d2d1a44eae0 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:49:38 newest-cni-733680 crio[843]: time="2026-01-10T02:49:38.621181558Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=c8095dba-1e32-4551-a142-e04ade054183 name=/runtime.v1.ImageService/PullImage
	Jan 10 02:49:38 newest-cni-733680 crio[843]: time="2026-01-10T02:49:38.629831651Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\""
	Jan 10 02:49:38 newest-cni-733680 crio[843]: time="2026-01-10T02:49:38.633264389Z" level=info msg="Ran pod sandbox e078d1072d7b27ec0ee68268c1fb4b9cc237940385ba3fd6ba682ee7e84b0b2a with infra container: kube-system/kube-proxy-mnr64/POD" id=acf654e5-5482-4441-86f7-d512654972b8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:49:38 newest-cni-733680 crio[843]: time="2026-01-10T02:49:38.641441123Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=c1bf1b40-f64b-4ff5-8653-747996d2264d name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:49:38 newest-cni-733680 crio[843]: time="2026-01-10T02:49:38.646037194Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=ba26a006-63f3-440b-8a33-6c1c6608d29e name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:49:38 newest-cni-733680 crio[843]: time="2026-01-10T02:49:38.656121809Z" level=info msg="Creating container: kube-system/kube-proxy-mnr64/kube-proxy" id=662faae8-bbc9-41f1-8c95-c044917dbc6c name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:49:38 newest-cni-733680 crio[843]: time="2026-01-10T02:49:38.656236506Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:38 newest-cni-733680 crio[843]: time="2026-01-10T02:49:38.673726388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:38 newest-cni-733680 crio[843]: time="2026-01-10T02:49:38.674369949Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:38 newest-cni-733680 crio[843]: time="2026-01-10T02:49:38.725721481Z" level=info msg="Created container 18fb548794f46abcb642d8948bfb9863afafff26271c2721ca0a32c222d74676: kube-system/kube-proxy-mnr64/kube-proxy" id=662faae8-bbc9-41f1-8c95-c044917dbc6c name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:49:38 newest-cni-733680 crio[843]: time="2026-01-10T02:49:38.727523608Z" level=info msg="Starting container: 18fb548794f46abcb642d8948bfb9863afafff26271c2721ca0a32c222d74676" id=3fb254fe-e715-415c-b073-93d05d683376 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:49:38 newest-cni-733680 crio[843]: time="2026-01-10T02:49:38.736123429Z" level=info msg="Started container" PID=1472 containerID=18fb548794f46abcb642d8948bfb9863afafff26271c2721ca0a32c222d74676 description=kube-system/kube-proxy-mnr64/kube-proxy id=3fb254fe-e715-415c-b073-93d05d683376 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e078d1072d7b27ec0ee68268c1fb4b9cc237940385ba3fd6ba682ee7e84b0b2a
	Jan 10 02:49:41 newest-cni-733680 crio[843]: time="2026-01-10T02:49:41.342552609Z" level=info msg="Pulled image: docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3" id=c8095dba-1e32-4551-a142-e04ade054183 name=/runtime.v1.ImageService/PullImage
	Jan 10 02:49:41 newest-cni-733680 crio[843]: time="2026-01-10T02:49:41.343775001Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=cbdba286-bf1c-40f0-9c1d-4a689f03fc23 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:49:41 newest-cni-733680 crio[843]: time="2026-01-10T02:49:41.349779223Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=d5c4c890-0b9c-4f86-9a39-a960c5e7759f name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:49:41 newest-cni-733680 crio[843]: time="2026-01-10T02:49:41.35550662Z" level=info msg="Creating container: kube-system/kindnet-bnwfz/kindnet-cni" id=6499298c-4b7a-4c25-a4c8-63c47b1eeb14 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:49:41 newest-cni-733680 crio[843]: time="2026-01-10T02:49:41.355711127Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:41 newest-cni-733680 crio[843]: time="2026-01-10T02:49:41.362634275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:41 newest-cni-733680 crio[843]: time="2026-01-10T02:49:41.363259219Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:41 newest-cni-733680 crio[843]: time="2026-01-10T02:49:41.383302617Z" level=info msg="Created container 5d60028cb178df03611b12edd4e31837a9a55ea90a19a23bc0b8407b22b27c61: kube-system/kindnet-bnwfz/kindnet-cni" id=6499298c-4b7a-4c25-a4c8-63c47b1eeb14 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	5d60028cb178d       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3   Less than a second ago   Running             kindnet-cni               0                   a2689ca4f4940       kindnet-bnwfz                               kube-system
	18fb548794f46       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                     2 seconds ago            Running             kube-proxy                0                   e078d1072d7b2       kube-proxy-mnr64                            kube-system
	dd98211212e56       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                     15 seconds ago           Running             kube-scheduler            0                   fb13a928178f7       kube-scheduler-newest-cni-733680            kube-system
	856a00c8fcf80       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                     15 seconds ago           Running             kube-controller-manager   0                   161bc5be04016       kube-controller-manager-newest-cni-733680   kube-system
	f2a45d84ade6f       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                     15 seconds ago           Running             kube-apiserver            0                   af528b4da57da       kube-apiserver-newest-cni-733680            kube-system
	911d2beefb8fa       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                     15 seconds ago           Running             etcd                      0                   e29dafb986a97       etcd-newest-cni-733680                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-733680
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-733680
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=newest-cni-733680
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_49_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:49:30 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-733680
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:49:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:49:32 +0000   Sat, 10 Jan 2026 02:49:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:49:32 +0000   Sat, 10 Jan 2026 02:49:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:49:32 +0000   Sat, 10 Jan 2026 02:49:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 10 Jan 2026 02:49:32 +0000   Sat, 10 Jan 2026 02:49:26 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-733680
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                c296da09-1974-40e8-a0f3-9e9b1313e8dc
	  Boot ID:                    41f52236-76b9-4dc8-bfe3-121fbdeb9659
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-733680                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-bnwfz                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-733680             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-733680    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-mnr64                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-733680             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  5s    node-controller  Node newest-cni-733680 event: Registered Node newest-cni-733680 in Controller
	
	
	==> dmesg <==
	[Jan10 02:16] overlayfs: idmapped layers are currently not supported
	[Jan10 02:17] overlayfs: idmapped layers are currently not supported
	[Jan10 02:18] overlayfs: idmapped layers are currently not supported
	[ +23.212409] overlayfs: idmapped layers are currently not supported
	[  +9.866169] overlayfs: idmapped layers are currently not supported
	[Jan10 02:19] overlayfs: idmapped layers are currently not supported
	[Jan10 02:20] overlayfs: idmapped layers are currently not supported
	[ +35.960240] overlayfs: idmapped layers are currently not supported
	[Jan10 02:21] overlayfs: idmapped layers are currently not supported
	[Jan10 02:23] overlayfs: idmapped layers are currently not supported
	[Jan10 02:24] overlayfs: idmapped layers are currently not supported
	[Jan10 02:25] overlayfs: idmapped layers are currently not supported
	[Jan10 02:30] overlayfs: idmapped layers are currently not supported
	[Jan10 02:31] overlayfs: idmapped layers are currently not supported
	[Jan10 02:35] overlayfs: idmapped layers are currently not supported
	[Jan10 02:37] overlayfs: idmapped layers are currently not supported
	[Jan10 02:41] overlayfs: idmapped layers are currently not supported
	[ +37.534021] overlayfs: idmapped layers are currently not supported
	[Jan10 02:43] overlayfs: idmapped layers are currently not supported
	[Jan10 02:44] overlayfs: idmapped layers are currently not supported
	[Jan10 02:45] overlayfs: idmapped layers are currently not supported
	[Jan10 02:46] overlayfs: idmapped layers are currently not supported
	[Jan10 02:48] overlayfs: idmapped layers are currently not supported
	[Jan10 02:49] overlayfs: idmapped layers are currently not supported
	[  +4.690964] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [911d2beefb8fae51fa376a375e0e4f6608355688d9f4d2b6155c39643c311a58] <==
	{"level":"info","ts":"2026-01-10T02:49:26.419775Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T02:49:26.776277Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-10T02:49:26.776333Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-10T02:49:26.776368Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2026-01-10T02:49:26.776380Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:49:26.776394Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:49:26.785797Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T02:49:26.785844Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:49:26.785865Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2026-01-10T02:49:26.785874Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T02:49:26.788109Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-733680 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:49:26.788321Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:49:26.788514Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:49:26.791575Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:49:26.791740Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:49:26.792004Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:49:26.792065Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:49:26.796436Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T02:49:26.796989Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:49:26.797119Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:49:26.797187Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:49:26.797264Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T02:49:26.797364Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-10T02:49:26.798135Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:49:26.817149Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 02:49:41 up  1:32,  0 user,  load average: 4.33, 2.72, 2.14
	Linux newest-cni-733680 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [f2a45d84ade6ffc26580b3e7802ce885116b9b539d5e41935de4b3a9b23d8293] <==
	I0110 02:49:30.096556       1 policy_source.go:248] refreshing policies
	I0110 02:49:30.124048       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 02:49:30.161779       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:49:30.201672       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:49:30.225798       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:49:30.229175       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 02:49:30.270315       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 02:49:30.270641       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:49:30.772003       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0110 02:49:30.779338       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0110 02:49:30.779363       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:49:31.567310       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:49:31.632844       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:49:31.710721       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0110 02:49:31.718450       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0110 02:49:31.719614       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 02:49:31.731398       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:49:31.987077       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:49:32.670773       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:49:32.695368       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0110 02:49:32.721322       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0110 02:49:37.547204       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:49:37.554539       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:49:37.643633       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:49:38.165785       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [856a00c8fcf80defcf605bac04dd1cfea9d8ee247772127941216ee6a706dedd] <==
	I0110 02:49:36.818878       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0110 02:49:36.818934       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:36.818960       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:36.820568       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:36.822998       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:36.824690       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:36.824730       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:36.824794       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:36.826902       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:36.828741       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:36.831034       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:36.832157       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:36.837895       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:36.837938       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:36.839278       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:36.840120       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:36.840571       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:36.840621       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:36.840923       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:36.859087       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-733680" podCIDRs=["10.42.0.0/24"]
	I0110 02:49:36.946950       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:49:37.009159       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:37.009351       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:49:37.009386       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:49:37.047888       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [18fb548794f46abcb642d8948bfb9863afafff26271c2721ca0a32c222d74676] <==
	I0110 02:49:38.851577       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:49:38.974941       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:49:39.076035       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:39.076067       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 02:49:39.076163       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:49:39.118915       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:49:39.118990       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:49:39.124715       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:49:39.125023       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:49:39.125043       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:49:39.128594       1 config.go:200] "Starting service config controller"
	I0110 02:49:39.128612       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:49:39.128673       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:49:39.128678       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:49:39.132262       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:49:39.132280       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:49:39.133094       1 config.go:309] "Starting node config controller"
	I0110 02:49:39.133107       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:49:39.133113       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:49:39.229142       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 02:49:39.232766       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:49:39.232798       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [dd98211212e5698a15c830a58e603cc57e7f8b9d5dbccf431a3855e9c785a57c] <==
	E0110 02:49:30.079134       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 02:49:30.079193       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 02:49:30.079250       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 02:49:30.079296       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 02:49:30.079684       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 02:49:30.082078       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 02:49:30.082670       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 02:49:30.083237       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 02:49:30.085532       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 02:49:30.086102       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 02:49:30.079789       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 02:49:30.086261       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 02:49:30.086331       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 02:49:30.938433       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 02:49:30.950176       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 02:49:30.957521       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 02:49:30.992438       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 02:49:31.032137       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 02:49:31.060442       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 02:49:31.185391       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 02:49:31.281199       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E0110 02:49:31.281855       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 02:49:31.292411       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 02:49:31.330424       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	I0110 02:49:33.401777       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:49:34 newest-cni-733680 kubelet[1298]: I0110 02:49:34.174579    1298 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-733680" podStartSLOduration=2.174545602 podStartE2EDuration="2.174545602s" podCreationTimestamp="2026-01-10 02:49:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:49:33.989249176 +0000 UTC m=+1.479940117" watchObservedRunningTime="2026-01-10 02:49:34.174545602 +0000 UTC m=+1.665236535"
	Jan 10 02:49:34 newest-cni-733680 kubelet[1298]: I0110 02:49:34.174927    1298 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-733680" podStartSLOduration=2.174920944 podStartE2EDuration="2.174920944s" podCreationTimestamp="2026-01-10 02:49:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:49:34.174718308 +0000 UTC m=+1.665409249" watchObservedRunningTime="2026-01-10 02:49:34.174920944 +0000 UTC m=+1.665611877"
	Jan 10 02:49:34 newest-cni-733680 kubelet[1298]: I0110 02:49:34.299321    1298 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-733680" podStartSLOduration=2.299304877 podStartE2EDuration="2.299304877s" podCreationTimestamp="2026-01-10 02:49:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:49:34.258036532 +0000 UTC m=+1.748727465" watchObservedRunningTime="2026-01-10 02:49:34.299304877 +0000 UTC m=+1.789995810"
	Jan 10 02:49:34 newest-cni-733680 kubelet[1298]: I0110 02:49:34.347233    1298 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-733680" podStartSLOduration=2.347215623 podStartE2EDuration="2.347215623s" podCreationTimestamp="2026-01-10 02:49:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:49:34.307637161 +0000 UTC m=+1.798328094" watchObservedRunningTime="2026-01-10 02:49:34.347215623 +0000 UTC m=+1.837906572"
	Jan 10 02:49:34 newest-cni-733680 kubelet[1298]: E0110 02:49:34.824782    1298 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-733680" containerName="kube-controller-manager"
	Jan 10 02:49:34 newest-cni-733680 kubelet[1298]: E0110 02:49:34.825225    1298 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-733680" containerName="kube-apiserver"
	Jan 10 02:49:34 newest-cni-733680 kubelet[1298]: E0110 02:49:34.825468    1298 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-733680" containerName="kube-scheduler"
	Jan 10 02:49:34 newest-cni-733680 kubelet[1298]: E0110 02:49:34.825705    1298 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-733680" containerName="etcd"
	Jan 10 02:49:35 newest-cni-733680 kubelet[1298]: E0110 02:49:35.825574    1298 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-733680" containerName="kube-scheduler"
	Jan 10 02:49:36 newest-cni-733680 kubelet[1298]: I0110 02:49:36.918847    1298 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Jan 10 02:49:36 newest-cni-733680 kubelet[1298]: I0110 02:49:36.920833    1298 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Jan 10 02:49:38 newest-cni-733680 kubelet[1298]: I0110 02:49:38.360158    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/49fa87e1-f1d9-4315-918f-b079caade618-cni-cfg\") pod \"kindnet-bnwfz\" (UID: \"49fa87e1-f1d9-4315-918f-b079caade618\") " pod="kube-system/kindnet-bnwfz"
	Jan 10 02:49:38 newest-cni-733680 kubelet[1298]: I0110 02:49:38.360205    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49fa87e1-f1d9-4315-918f-b079caade618-lib-modules\") pod \"kindnet-bnwfz\" (UID: \"49fa87e1-f1d9-4315-918f-b079caade618\") " pod="kube-system/kindnet-bnwfz"
	Jan 10 02:49:38 newest-cni-733680 kubelet[1298]: I0110 02:49:38.360227    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0f5cc6b8-6364-4711-a838-fb70b057c4ef-kube-proxy\") pod \"kube-proxy-mnr64\" (UID: \"0f5cc6b8-6364-4711-a838-fb70b057c4ef\") " pod="kube-system/kube-proxy-mnr64"
	Jan 10 02:49:38 newest-cni-733680 kubelet[1298]: I0110 02:49:38.360256    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f5cc6b8-6364-4711-a838-fb70b057c4ef-xtables-lock\") pod \"kube-proxy-mnr64\" (UID: \"0f5cc6b8-6364-4711-a838-fb70b057c4ef\") " pod="kube-system/kube-proxy-mnr64"
	Jan 10 02:49:38 newest-cni-733680 kubelet[1298]: I0110 02:49:38.360274    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f5cc6b8-6364-4711-a838-fb70b057c4ef-lib-modules\") pod \"kube-proxy-mnr64\" (UID: \"0f5cc6b8-6364-4711-a838-fb70b057c4ef\") " pod="kube-system/kube-proxy-mnr64"
	Jan 10 02:49:38 newest-cni-733680 kubelet[1298]: I0110 02:49:38.360294    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmp8r\" (UniqueName: \"kubernetes.io/projected/49fa87e1-f1d9-4315-918f-b079caade618-kube-api-access-jmp8r\") pod \"kindnet-bnwfz\" (UID: \"49fa87e1-f1d9-4315-918f-b079caade618\") " pod="kube-system/kindnet-bnwfz"
	Jan 10 02:49:38 newest-cni-733680 kubelet[1298]: I0110 02:49:38.360316    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49fa87e1-f1d9-4315-918f-b079caade618-xtables-lock\") pod \"kindnet-bnwfz\" (UID: \"49fa87e1-f1d9-4315-918f-b079caade618\") " pod="kube-system/kindnet-bnwfz"
	Jan 10 02:49:38 newest-cni-733680 kubelet[1298]: I0110 02:49:38.360333    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s4bf\" (UniqueName: \"kubernetes.io/projected/0f5cc6b8-6364-4711-a838-fb70b057c4ef-kube-api-access-7s4bf\") pod \"kube-proxy-mnr64\" (UID: \"0f5cc6b8-6364-4711-a838-fb70b057c4ef\") " pod="kube-system/kube-proxy-mnr64"
	Jan 10 02:49:38 newest-cni-733680 kubelet[1298]: I0110 02:49:38.484454    1298 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Jan 10 02:49:38 newest-cni-733680 kubelet[1298]: W0110 02:49:38.609108    1298 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/332f4ab8cb32b25616ea970bfb5fb05bb610d82c68c84ff5df6ca1a0461deecb/crio-a2689ca4f49407727b0372863d2f21986e13b1474db34d547f68104213b77432 WatchSource:0}: Error finding container a2689ca4f49407727b0372863d2f21986e13b1474db34d547f68104213b77432: Status 404 returned error can't find the container with id a2689ca4f49407727b0372863d2f21986e13b1474db34d547f68104213b77432
	Jan 10 02:49:39 newest-cni-733680 kubelet[1298]: E0110 02:49:39.271597    1298 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-733680" containerName="kube-apiserver"
	Jan 10 02:49:39 newest-cni-733680 kubelet[1298]: I0110 02:49:39.322584    1298 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-mnr64" podStartSLOduration=1.322565437 podStartE2EDuration="1.322565437s" podCreationTimestamp="2026-01-10 02:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:49:38.860295291 +0000 UTC m=+6.350986249" watchObservedRunningTime="2026-01-10 02:49:39.322565437 +0000 UTC m=+6.813256395"
	Jan 10 02:49:40 newest-cni-733680 kubelet[1298]: E0110 02:49:40.480364    1298 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-733680" containerName="etcd"
	Jan 10 02:49:41 newest-cni-733680 kubelet[1298]: I0110 02:49:41.858032    1298 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-bnwfz" podStartSLOduration=1.127522276 podStartE2EDuration="3.858015505s" podCreationTimestamp="2026-01-10 02:49:38 +0000 UTC" firstStartedPulling="2026-01-10 02:49:38.614428285 +0000 UTC m=+6.105119218" lastFinishedPulling="2026-01-10 02:49:41.344921514 +0000 UTC m=+8.835612447" observedRunningTime="2026-01-10 02:49:41.857908726 +0000 UTC m=+9.348599659" watchObservedRunningTime="2026-01-10 02:49:41.858015505 +0000 UTC m=+9.348706446"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-733680 -n newest-cni-733680
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-733680 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-7djps storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-733680 describe pod coredns-7d764666f9-7djps storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-733680 describe pod coredns-7d764666f9-7djps storage-provisioner: exit status 1 (79.777511ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-7djps" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-733680 describe pod coredns-7d764666f9-7djps storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.73s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (7.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-733680 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-733680 --alsologtostderr -v=1: exit status 80 (2.086673393s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-733680 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:49:58.480290  229621 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:49:58.480475  229621 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:49:58.480505  229621 out.go:374] Setting ErrFile to fd 2...
	I0110 02:49:58.480524  229621 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:49:58.480883  229621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:49:58.481212  229621 out.go:368] Setting JSON to false
	I0110 02:49:58.481273  229621 mustload.go:66] Loading cluster: newest-cni-733680
	I0110 02:49:58.481991  229621 config.go:182] Loaded profile config "newest-cni-733680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:49:58.482834  229621 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:58.499944  229621 host.go:66] Checking if "newest-cni-733680" exists ...
	I0110 02:49:58.500252  229621 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:49:58.560606  229621 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2026-01-10 02:49:58.55070193 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:49:58.561227  229621 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22414/minikube-v1.37.0-1767924026-22414-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767924026-22414/minikube-v1.37.0-1767924026-22414-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767924026-22414-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:newest-cni-733680 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0110 02:49:58.564760  229621 out.go:179] * Pausing node newest-cni-733680 ... 
	I0110 02:49:58.568559  229621 host.go:66] Checking if "newest-cni-733680" exists ...
	I0110 02:49:58.568892  229621 ssh_runner.go:195] Run: systemctl --version
	I0110 02:49:58.568938  229621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:58.586120  229621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:58.690296  229621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:49:58.702176  229621 pause.go:52] kubelet running: true
	I0110 02:49:58.702277  229621 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:49:58.927131  229621 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:49:58.927276  229621 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:49:59.063373  229621 cri.go:96] found id: "d50472bf13056b006d7be9a47a3b5f1710e956b40dc97490122f62498e3e741b"
	I0110 02:49:59.063458  229621 cri.go:96] found id: "f3115642cb9635a0a0fb487d418fee931ed516febc3248ca4c87c80d330b4a87"
	I0110 02:49:59.063478  229621 cri.go:96] found id: "ea5078ffbb2ab2a6bbaa2c34fad84cb3a86c77b7c22d703c398c8f81af573168"
	I0110 02:49:59.063506  229621 cri.go:96] found id: "564644b28306f6bcee019ff3074f2c75514833f9bf107fbb9bc76939c4b892b7"
	I0110 02:49:59.063549  229621 cri.go:96] found id: "9ede343cb2e278e170c60426ec5533a577c1bde2bb263e32f4b5e061784803c7"
	I0110 02:49:59.063581  229621 cri.go:96] found id: "b2affd0966bb9dbc433519da5f2d62205ba805d51c020dd49252bd59427932dc"
	I0110 02:49:59.063610  229621 cri.go:96] found id: ""
	I0110 02:49:59.063693  229621 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:49:59.080755  229621 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:49:59Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:49:59.344283  229621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:49:59.362428  229621 pause.go:52] kubelet running: false
	I0110 02:49:59.362545  229621 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:49:59.529173  229621 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:49:59.529300  229621 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:49:59.603113  229621 cri.go:96] found id: "d50472bf13056b006d7be9a47a3b5f1710e956b40dc97490122f62498e3e741b"
	I0110 02:49:59.603188  229621 cri.go:96] found id: "f3115642cb9635a0a0fb487d418fee931ed516febc3248ca4c87c80d330b4a87"
	I0110 02:49:59.603226  229621 cri.go:96] found id: "ea5078ffbb2ab2a6bbaa2c34fad84cb3a86c77b7c22d703c398c8f81af573168"
	I0110 02:49:59.603256  229621 cri.go:96] found id: "564644b28306f6bcee019ff3074f2c75514833f9bf107fbb9bc76939c4b892b7"
	I0110 02:49:59.603280  229621 cri.go:96] found id: "9ede343cb2e278e170c60426ec5533a577c1bde2bb263e32f4b5e061784803c7"
	I0110 02:49:59.603301  229621 cri.go:96] found id: "b2affd0966bb9dbc433519da5f2d62205ba805d51c020dd49252bd59427932dc"
	I0110 02:49:59.603321  229621 cri.go:96] found id: ""
	I0110 02:49:59.603400  229621 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:49:59.915107  229621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:49:59.928693  229621 pause.go:52] kubelet running: false
	I0110 02:49:59.928808  229621 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:50:00.264639  229621 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:50:00.264734  229621 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:50:00.473941  229621 cri.go:96] found id: "d50472bf13056b006d7be9a47a3b5f1710e956b40dc97490122f62498e3e741b"
	I0110 02:50:00.473968  229621 cri.go:96] found id: "f3115642cb9635a0a0fb487d418fee931ed516febc3248ca4c87c80d330b4a87"
	I0110 02:50:00.473976  229621 cri.go:96] found id: "ea5078ffbb2ab2a6bbaa2c34fad84cb3a86c77b7c22d703c398c8f81af573168"
	I0110 02:50:00.473980  229621 cri.go:96] found id: "564644b28306f6bcee019ff3074f2c75514833f9bf107fbb9bc76939c4b892b7"
	I0110 02:50:00.473984  229621 cri.go:96] found id: "9ede343cb2e278e170c60426ec5533a577c1bde2bb263e32f4b5e061784803c7"
	I0110 02:50:00.473989  229621 cri.go:96] found id: "b2affd0966bb9dbc433519da5f2d62205ba805d51c020dd49252bd59427932dc"
	I0110 02:50:00.473992  229621 cri.go:96] found id: ""
	I0110 02:50:00.474057  229621 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:50:00.496116  229621 out.go:203] 
	W0110 02:50:00.499018  229621 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:50:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:50:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 02:50:00.499044  229621 out.go:285] * 
	* 
	W0110 02:50:00.502637  229621 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 02:50:00.506431  229621 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-733680 --alsologtostderr -v=1 failed: exit status 80
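Note on the failure mode above: as the captured stderr shows, the pause path shells out to "sudo runc list -f json" on the node, and that command exits 1 because the runc state directory /run/runc does not exist. A minimal manual check, assuming the newest-cni-733680 profile from this run were still up, would simply repeat the commands already visible in the log from a node shell:

    minikube ssh -p newest-cni-733680        # shell into the node for this profile
    ls /run/runc                             # the state directory runc is asked to read; missing here
    sudo runc list -f json                   # reproduces "open /run/runc: no such file or directory"
    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system    # CRI-O still reports the kube-system containers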
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-733680
helpers_test.go:244: (dbg) docker inspect newest-cni-733680:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "332f4ab8cb32b25616ea970bfb5fb05bb610d82c68c84ff5df6ca1a0461deecb",
	        "Created": "2026-01-10T02:49:13.990665872Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 227850,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:49:44.598785864Z",
	            "FinishedAt": "2026-01-10T02:49:43.793795697Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/332f4ab8cb32b25616ea970bfb5fb05bb610d82c68c84ff5df6ca1a0461deecb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/332f4ab8cb32b25616ea970bfb5fb05bb610d82c68c84ff5df6ca1a0461deecb/hostname",
	        "HostsPath": "/var/lib/docker/containers/332f4ab8cb32b25616ea970bfb5fb05bb610d82c68c84ff5df6ca1a0461deecb/hosts",
	        "LogPath": "/var/lib/docker/containers/332f4ab8cb32b25616ea970bfb5fb05bb610d82c68c84ff5df6ca1a0461deecb/332f4ab8cb32b25616ea970bfb5fb05bb610d82c68c84ff5df6ca1a0461deecb-json.log",
	        "Name": "/newest-cni-733680",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-733680:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-733680",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "332f4ab8cb32b25616ea970bfb5fb05bb610d82c68c84ff5df6ca1a0461deecb",
	                "LowerDir": "/var/lib/docker/overlay2/84cf567e665ab8640c65dbd3cc3a50ebdb994b7629de1cf36f4ce8e5f0c36014-init/diff:/var/lib/docker/overlay2/2b235871be32ebd3e55c0a490bf5b289824c6be94b4d2763f5bca3b814af3fd1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/84cf567e665ab8640c65dbd3cc3a50ebdb994b7629de1cf36f4ce8e5f0c36014/merged",
	                "UpperDir": "/var/lib/docker/overlay2/84cf567e665ab8640c65dbd3cc3a50ebdb994b7629de1cf36f4ce8e5f0c36014/diff",
	                "WorkDir": "/var/lib/docker/overlay2/84cf567e665ab8640c65dbd3cc3a50ebdb994b7629de1cf36f4ce8e5f0c36014/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-733680",
	                "Source": "/var/lib/docker/volumes/newest-cni-733680/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-733680",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-733680",
	                "name.minikube.sigs.k8s.io": "newest-cni-733680",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d6bbe9c8a2a75107074b9ca17c9c433e43c618ae4b8f1048e2645d94dc5161ab",
	            "SandboxKey": "/var/run/docker/netns/d6bbe9c8a2a7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-733680": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:29:39:1a:e1:62",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c13b302fa5016d01187ac7a2edef31e75f6720c21560b52b4739e7f7514c4136",
	                    "EndpointID": "7dcd9a50e728c6dcac304e464dc877f6a39d5452440b0d27ca43ff51aa7d853c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-733680",
	                        "332f4ab8cb32"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
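For reference, the individual fields the helpers read out of this inspect output are queried with Go templates (the same --format/-f expressions that appear in the cli_runner lines) rather than the full JSON dump; equivalent one-off queries against this container, with values taken from the output above, would look like:

    docker inspect -f '{{.State.Status}}' newest-cni-733680
    # running
    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-733680
    # 33088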
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-733680 -n newest-cni-733680
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-733680 -n newest-cni-733680: exit status 2 (454.958624ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-733680 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-733680 logs -n 25: (1.505313677s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ image   │ embed-certs-290628 image list --format=json                                                                                                                                                                                                   │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ pause   │ -p embed-certs-290628 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │                     │
	│ delete  │ -p embed-certs-290628                                                                                                                                                                                                                         │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ delete  │ -p embed-certs-290628                                                                                                                                                                                                                         │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ delete  │ -p disable-driver-mounts-990753                                                                                                                                                                                                               │ disable-driver-mounts-990753 │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ start   │ -p no-preload-676905 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:47 UTC │
	│ addons  │ enable metrics-server -p no-preload-676905 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │                     │
	│ stop    │ -p no-preload-676905 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │ 10 Jan 26 02:47 UTC │
	│ addons  │ enable dashboard -p no-preload-676905 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │ 10 Jan 26 02:47 UTC │
	│ start   │ -p no-preload-676905 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │ 10 Jan 26 02:48 UTC │
	│ image   │ no-preload-676905 image list --format=json                                                                                                                                                                                                    │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │ 10 Jan 26 02:48 UTC │
	│ pause   │ -p no-preload-676905 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │                     │
	│ ssh     │ force-systemd-flag-038359 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-038359    │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │ 10 Jan 26 02:48 UTC │
	│ delete  │ -p force-systemd-flag-038359                                                                                                                                                                                                                  │ force-systemd-flag-038359    │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │ 10 Jan 26 02:49 UTC │
	│ start   │ -p default-k8s-diff-port-403885 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-403885 │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ delete  │ -p no-preload-676905                                                                                                                                                                                                                          │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ delete  │ -p no-preload-676905                                                                                                                                                                                                                          │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ start   │ -p newest-cni-733680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-733680            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ addons  │ enable metrics-server -p newest-cni-733680 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-733680            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │                     │
	│ stop    │ -p newest-cni-733680 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-733680            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ addons  │ enable dashboard -p newest-cni-733680 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-733680            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ start   │ -p newest-cni-733680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-733680            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ image   │ newest-cni-733680 image list --format=json                                                                                                                                                                                                    │ newest-cni-733680            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ pause   │ -p newest-cni-733680 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-733680            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-403885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-403885 │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:49:44
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:49:44.309497  227721 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:49:44.309694  227721 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:49:44.309721  227721 out.go:374] Setting ErrFile to fd 2...
	I0110 02:49:44.309742  227721 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:49:44.310463  227721 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:49:44.310909  227721 out.go:368] Setting JSON to false
	I0110 02:49:44.311837  227721 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5534,"bootTime":1768007851,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 02:49:44.311908  227721 start.go:143] virtualization:  
	I0110 02:49:44.314988  227721 out.go:179] * [newest-cni-733680] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:49:44.318901  227721 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:49:44.318981  227721 notify.go:221] Checking for updates...
	I0110 02:49:44.324985  227721 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:49:44.328078  227721 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:49:44.331185  227721 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 02:49:44.334208  227721 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:49:44.338487  227721 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:49:44.341862  227721 config.go:182] Loaded profile config "newest-cni-733680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:49:44.342521  227721 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:49:44.367410  227721 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:49:44.367524  227721 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:49:44.444412  227721 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:49:44.433211036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:49:44.444531  227721 docker.go:319] overlay module found
	I0110 02:49:44.447876  227721 out.go:179] * Using the docker driver based on existing profile
	I0110 02:49:44.451345  227721 start.go:309] selected driver: docker
	I0110 02:49:44.451366  227721 start.go:928] validating driver "docker" against &{Name:newest-cni-733680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-733680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:49:44.451475  227721 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:49:44.452206  227721 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:49:44.507205  227721 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:49:44.498529625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:49:44.507559  227721 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 02:49:44.507615  227721 cni.go:84] Creating CNI manager for ""
	I0110 02:49:44.507673  227721 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:49:44.507711  227721 start.go:353] cluster config:
	{Name:newest-cni-733680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-733680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:49:44.512606  227721 out.go:179] * Starting "newest-cni-733680" primary control-plane node in "newest-cni-733680" cluster
	I0110 02:49:44.515426  227721 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:49:44.518453  227721 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:49:44.521364  227721 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:49:44.521413  227721 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 02:49:44.521426  227721 cache.go:65] Caching tarball of preloaded images
	I0110 02:49:44.521435  227721 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:49:44.521506  227721 preload.go:251] Found /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 02:49:44.521516  227721 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:49:44.521629  227721 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/config.json ...
	I0110 02:49:44.541711  227721 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:49:44.541735  227721 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:49:44.541750  227721 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:49:44.541787  227721 start.go:360] acquireMachinesLock for newest-cni-733680: {Name:mkffafc06373cf7d630e08f2554eaef3a62ff5c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:49:44.541849  227721 start.go:364] duration metric: took 34.297µs to acquireMachinesLock for "newest-cni-733680"
	I0110 02:49:44.541873  227721 start.go:96] Skipping create...Using existing machine configuration
	I0110 02:49:44.541886  227721 fix.go:54] fixHost starting: 
	I0110 02:49:44.542210  227721 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:44.558798  227721 fix.go:112] recreateIfNeeded on newest-cni-733680: state=Stopped err=<nil>
	W0110 02:49:44.558830  227721 fix.go:138] unexpected machine state, will restart: <nil>
	W0110 02:49:44.679057  221603 node_ready.go:57] node "default-k8s-diff-port-403885" has "Ready":"False" status (will retry)
	W0110 02:49:47.175918  221603 node_ready.go:57] node "default-k8s-diff-port-403885" has "Ready":"False" status (will retry)
	I0110 02:49:44.562180  227721 out.go:252] * Restarting existing docker container for "newest-cni-733680" ...
	I0110 02:49:44.562266  227721 cli_runner.go:164] Run: docker start newest-cni-733680
	I0110 02:49:44.826055  227721 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:44.849274  227721 kic.go:430] container "newest-cni-733680" state is running.
	I0110 02:49:44.849680  227721 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-733680
	I0110 02:49:44.872507  227721 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/config.json ...
	I0110 02:49:44.872747  227721 machine.go:94] provisionDockerMachine start ...
	I0110 02:49:44.872811  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:44.892850  227721 main.go:144] libmachine: Using SSH client type: native
	I0110 02:49:44.893319  227721 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0110 02:49:44.893332  227721 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:49:44.893964  227721 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 02:49:48.067572  227721 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-733680
	
	I0110 02:49:48.067638  227721 ubuntu.go:182] provisioning hostname "newest-cni-733680"
	I0110 02:49:48.067727  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:48.098348  227721 main.go:144] libmachine: Using SSH client type: native
	I0110 02:49:48.098657  227721 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0110 02:49:48.098670  227721 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-733680 && echo "newest-cni-733680" | sudo tee /etc/hostname
	I0110 02:49:48.261145  227721 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-733680
	
	I0110 02:49:48.261217  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:48.281566  227721 main.go:144] libmachine: Using SSH client type: native
	I0110 02:49:48.281886  227721 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0110 02:49:48.281901  227721 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-733680' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-733680/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-733680' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:49:48.427922  227721 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:49:48.428009  227721 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2353/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2353/.minikube}
	I0110 02:49:48.428044  227721 ubuntu.go:190] setting up certificates
	I0110 02:49:48.428085  227721 provision.go:84] configureAuth start
	I0110 02:49:48.428173  227721 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-733680
	I0110 02:49:48.444728  227721 provision.go:143] copyHostCerts
	I0110 02:49:48.444802  227721 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem, removing ...
	I0110 02:49:48.444824  227721 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem
	I0110 02:49:48.444902  227721 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem (1082 bytes)
	I0110 02:49:48.445010  227721 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem, removing ...
	I0110 02:49:48.445020  227721 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem
	I0110 02:49:48.445048  227721 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem (1123 bytes)
	I0110 02:49:48.445111  227721 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem, removing ...
	I0110 02:49:48.445119  227721 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem
	I0110 02:49:48.445144  227721 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem (1679 bytes)
	I0110 02:49:48.445221  227721 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem org=jenkins.newest-cni-733680 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-733680]
	I0110 02:49:49.101821  227721 provision.go:177] copyRemoteCerts
	I0110 02:49:49.101923  227721 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:49:49.101988  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:49.119449  227721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:49.225157  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 02:49:49.258065  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:49:49.279139  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:49:49.297184  227721 provision.go:87] duration metric: took 869.064424ms to configureAuth
	I0110 02:49:49.297209  227721 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:49:49.297398  227721 config.go:182] Loaded profile config "newest-cni-733680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:49:49.297505  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:47.676475  221603 node_ready.go:49] node "default-k8s-diff-port-403885" is "Ready"
	I0110 02:49:47.676508  221603 node_ready.go:38] duration metric: took 12.003635661s for node "default-k8s-diff-port-403885" to be "Ready" ...
	I0110 02:49:47.676521  221603 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:49:47.676580  221603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:49:47.689566  221603 api_server.go:72] duration metric: took 13.671553329s to wait for apiserver process to appear ...
	I0110 02:49:47.689593  221603 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:49:47.689613  221603 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0110 02:49:47.698429  221603 api_server.go:325] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0110 02:49:47.699635  221603 api_server.go:141] control plane version: v1.35.0
	I0110 02:49:47.699659  221603 api_server.go:131] duration metric: took 10.058794ms to wait for apiserver health ...
	I0110 02:49:47.699668  221603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:49:47.702731  221603 system_pods.go:59] 8 kube-system pods found
	I0110 02:49:47.702773  221603 system_pods.go:61] "coredns-7d764666f9-sck2c" [8791efbb-04ac-4811-983a-9ffaf7bb15be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:49:47.702819  221603 system_pods.go:61] "etcd-default-k8s-diff-port-403885" [bdc3bfff-d5db-47c9-8f0f-81256caffbcb] Running
	I0110 02:49:47.702828  221603 system_pods.go:61] "kindnet-4h8vm" [f5820215-db87-4ed2-99a4-3970efeca785] Running
	I0110 02:49:47.702839  221603 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-403885" [708252d4-cdfc-4c44-bf86-993f95e9cbb6] Running
	I0110 02:49:47.702845  221603 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-403885" [a6e8ce72-0215-4cdc-b8a5-30863aba7c54] Running
	I0110 02:49:47.702852  221603 system_pods.go:61] "kube-proxy-ss9fs" [61ec02c4-f966-46de-bd2f-81de41f8f1bb] Running
	I0110 02:49:47.702875  221603 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-403885" [fde84223-af60-46ed-b21b-ccd83712651a] Running
	I0110 02:49:47.702889  221603 system_pods.go:61] "storage-provisioner" [83270b0f-e37c-4f07-a380-3ffb6386d492] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:49:47.702904  221603 system_pods.go:74] duration metric: took 3.222266ms to wait for pod list to return data ...
	I0110 02:49:47.702917  221603 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:49:47.705574  221603 default_sa.go:45] found service account: "default"
	I0110 02:49:47.705597  221603 default_sa.go:55] duration metric: took 2.67266ms for default service account to be created ...
	I0110 02:49:47.705606  221603 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:49:47.708460  221603 system_pods.go:86] 8 kube-system pods found
	I0110 02:49:47.708495  221603 system_pods.go:89] "coredns-7d764666f9-sck2c" [8791efbb-04ac-4811-983a-9ffaf7bb15be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:49:47.708502  221603 system_pods.go:89] "etcd-default-k8s-diff-port-403885" [bdc3bfff-d5db-47c9-8f0f-81256caffbcb] Running
	I0110 02:49:47.708509  221603 system_pods.go:89] "kindnet-4h8vm" [f5820215-db87-4ed2-99a4-3970efeca785] Running
	I0110 02:49:47.708514  221603 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-403885" [708252d4-cdfc-4c44-bf86-993f95e9cbb6] Running
	I0110 02:49:47.708519  221603 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-403885" [a6e8ce72-0215-4cdc-b8a5-30863aba7c54] Running
	I0110 02:49:47.708545  221603 system_pods.go:89] "kube-proxy-ss9fs" [61ec02c4-f966-46de-bd2f-81de41f8f1bb] Running
	I0110 02:49:47.708556  221603 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-403885" [fde84223-af60-46ed-b21b-ccd83712651a] Running
	I0110 02:49:47.708563  221603 system_pods.go:89] "storage-provisioner" [83270b0f-e37c-4f07-a380-3ffb6386d492] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:49:47.708593  221603 retry.go:84] will retry after 200ms: missing components: kube-dns
	I0110 02:49:47.905691  221603 system_pods.go:86] 8 kube-system pods found
	I0110 02:49:47.905730  221603 system_pods.go:89] "coredns-7d764666f9-sck2c" [8791efbb-04ac-4811-983a-9ffaf7bb15be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:49:47.905763  221603 system_pods.go:89] "etcd-default-k8s-diff-port-403885" [bdc3bfff-d5db-47c9-8f0f-81256caffbcb] Running
	I0110 02:49:47.905778  221603 system_pods.go:89] "kindnet-4h8vm" [f5820215-db87-4ed2-99a4-3970efeca785] Running
	I0110 02:49:47.905784  221603 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-403885" [708252d4-cdfc-4c44-bf86-993f95e9cbb6] Running
	I0110 02:49:47.905789  221603 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-403885" [a6e8ce72-0215-4cdc-b8a5-30863aba7c54] Running
	I0110 02:49:47.905797  221603 system_pods.go:89] "kube-proxy-ss9fs" [61ec02c4-f966-46de-bd2f-81de41f8f1bb] Running
	I0110 02:49:47.905802  221603 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-403885" [fde84223-af60-46ed-b21b-ccd83712651a] Running
	I0110 02:49:47.905814  221603 system_pods.go:89] "storage-provisioner" [83270b0f-e37c-4f07-a380-3ffb6386d492] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:49:48.270742  221603 system_pods.go:86] 8 kube-system pods found
	I0110 02:49:48.270779  221603 system_pods.go:89] "coredns-7d764666f9-sck2c" [8791efbb-04ac-4811-983a-9ffaf7bb15be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:49:48.270787  221603 system_pods.go:89] "etcd-default-k8s-diff-port-403885" [bdc3bfff-d5db-47c9-8f0f-81256caffbcb] Running
	I0110 02:49:48.270793  221603 system_pods.go:89] "kindnet-4h8vm" [f5820215-db87-4ed2-99a4-3970efeca785] Running
	I0110 02:49:48.270803  221603 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-403885" [708252d4-cdfc-4c44-bf86-993f95e9cbb6] Running
	I0110 02:49:48.270808  221603 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-403885" [a6e8ce72-0215-4cdc-b8a5-30863aba7c54] Running
	I0110 02:49:48.270816  221603 system_pods.go:89] "kube-proxy-ss9fs" [61ec02c4-f966-46de-bd2f-81de41f8f1bb] Running
	I0110 02:49:48.270820  221603 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-403885" [fde84223-af60-46ed-b21b-ccd83712651a] Running
	I0110 02:49:48.270827  221603 system_pods.go:89] "storage-provisioner" [83270b0f-e37c-4f07-a380-3ffb6386d492] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:49:48.741393  221603 system_pods.go:86] 8 kube-system pods found
	I0110 02:49:48.741424  221603 system_pods.go:89] "coredns-7d764666f9-sck2c" [8791efbb-04ac-4811-983a-9ffaf7bb15be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:49:48.741431  221603 system_pods.go:89] "etcd-default-k8s-diff-port-403885" [bdc3bfff-d5db-47c9-8f0f-81256caffbcb] Running
	I0110 02:49:48.741438  221603 system_pods.go:89] "kindnet-4h8vm" [f5820215-db87-4ed2-99a4-3970efeca785] Running
	I0110 02:49:48.741443  221603 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-403885" [708252d4-cdfc-4c44-bf86-993f95e9cbb6] Running
	I0110 02:49:48.741448  221603 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-403885" [a6e8ce72-0215-4cdc-b8a5-30863aba7c54] Running
	I0110 02:49:48.741452  221603 system_pods.go:89] "kube-proxy-ss9fs" [61ec02c4-f966-46de-bd2f-81de41f8f1bb] Running
	I0110 02:49:48.741457  221603 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-403885" [fde84223-af60-46ed-b21b-ccd83712651a] Running
	I0110 02:49:48.741464  221603 system_pods.go:89] "storage-provisioner" [83270b0f-e37c-4f07-a380-3ffb6386d492] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:49:49.198944  221603 system_pods.go:86] 8 kube-system pods found
	I0110 02:49:49.198973  221603 system_pods.go:89] "coredns-7d764666f9-sck2c" [8791efbb-04ac-4811-983a-9ffaf7bb15be] Running
	I0110 02:49:49.198980  221603 system_pods.go:89] "etcd-default-k8s-diff-port-403885" [bdc3bfff-d5db-47c9-8f0f-81256caffbcb] Running
	I0110 02:49:49.198987  221603 system_pods.go:89] "kindnet-4h8vm" [f5820215-db87-4ed2-99a4-3970efeca785] Running
	I0110 02:49:49.198992  221603 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-403885" [708252d4-cdfc-4c44-bf86-993f95e9cbb6] Running
	I0110 02:49:49.198997  221603 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-403885" [a6e8ce72-0215-4cdc-b8a5-30863aba7c54] Running
	I0110 02:49:49.199001  221603 system_pods.go:89] "kube-proxy-ss9fs" [61ec02c4-f966-46de-bd2f-81de41f8f1bb] Running
	I0110 02:49:49.199006  221603 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-403885" [fde84223-af60-46ed-b21b-ccd83712651a] Running
	I0110 02:49:49.199010  221603 system_pods.go:89] "storage-provisioner" [83270b0f-e37c-4f07-a380-3ffb6386d492] Running
	I0110 02:49:49.199017  221603 system_pods.go:126] duration metric: took 1.493406493s to wait for k8s-apps to be running ...
	I0110 02:49:49.199025  221603 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:49:49.199080  221603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:49:49.212882  221603 system_svc.go:56] duration metric: took 13.848142ms WaitForService to wait for kubelet
	I0110 02:49:49.212918  221603 kubeadm.go:587] duration metric: took 15.194907969s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:49:49.212963  221603 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:49:49.215938  221603 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 02:49:49.215966  221603 node_conditions.go:123] node cpu capacity is 2
	I0110 02:49:49.215982  221603 node_conditions.go:105] duration metric: took 3.011497ms to run NodePressure ...
	I0110 02:49:49.215995  221603 start.go:242] waiting for startup goroutines ...
	I0110 02:49:49.216002  221603 start.go:247] waiting for cluster config update ...
	I0110 02:49:49.216013  221603 start.go:256] writing updated cluster config ...
	I0110 02:49:49.216294  221603 ssh_runner.go:195] Run: rm -f paused
	I0110 02:49:49.222307  221603 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:49:49.226769  221603 pod_ready.go:83] waiting for pod "coredns-7d764666f9-sck2c" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:49.232583  221603 pod_ready.go:94] pod "coredns-7d764666f9-sck2c" is "Ready"
	I0110 02:49:49.232605  221603 pod_ready.go:86] duration metric: took 5.817167ms for pod "coredns-7d764666f9-sck2c" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:49.234917  221603 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:49.242710  221603 pod_ready.go:94] pod "etcd-default-k8s-diff-port-403885" is "Ready"
	I0110 02:49:49.242739  221603 pod_ready.go:86] duration metric: took 7.79565ms for pod "etcd-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:49.244995  221603 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:49.249365  221603 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-403885" is "Ready"
	I0110 02:49:49.249428  221603 pod_ready.go:86] duration metric: took 4.364446ms for pod "kube-apiserver-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:49.252107  221603 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:49.627051  221603 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-403885" is "Ready"
	I0110 02:49:49.627079  221603 pod_ready.go:86] duration metric: took 374.905825ms for pod "kube-controller-manager-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:49.826488  221603 pod_ready.go:83] waiting for pod "kube-proxy-ss9fs" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:50.226978  221603 pod_ready.go:94] pod "kube-proxy-ss9fs" is "Ready"
	I0110 02:49:50.227005  221603 pod_ready.go:86] duration metric: took 400.492397ms for pod "kube-proxy-ss9fs" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:50.426403  221603 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:50.826452  221603 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-403885" is "Ready"
	I0110 02:49:50.826483  221603 pod_ready.go:86] duration metric: took 400.056117ms for pod "kube-scheduler-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:50.826497  221603 pod_ready.go:40] duration metric: took 1.604160907s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:49:50.926905  221603 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 02:49:50.930644  221603 out.go:203] 
	W0110 02:49:50.934403  221603 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 02:49:50.937498  221603 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:49:50.941402  221603 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-403885" cluster and "default" namespace by default
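	The warning a few lines above reports a two-minor-version skew between the local kubectl (1.33.2) and the cluster (1.35.0). For illustration only, a minimal Go sketch of how such a "minor skew" could be computed from the two version strings; the function name minorSkew is invented here and this is not minikube's start.go implementation:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns the absolute difference between the minor components of
    // two "major.minor.patch" version strings, e.g. "1.33.2" vs "1.35.0" -> 2.
    func minorSkew(client, server string) (int, error) {
    	minor := func(v string) (int, error) {
    		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    		if len(parts) < 2 {
    			return 0, fmt.Errorf("unexpected version %q", v)
    		}
    		return strconv.Atoi(parts[1])
    	}
    	c, err := minor(client)
    	if err != nil {
    		return 0, err
    	}
    	s, err := minor(server)
    	if err != nil {
    		return 0, err
    	}
    	if c > s {
    		return c - s, nil
    	}
    	return s - c, nil
    }

    func main() {
    	skew, _ := minorSkew("1.33.2", "1.35.0") // versions taken from the log above
    	fmt.Printf("minor skew: %d\n", skew)     // prints: minor skew: 2
    }
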
	I0110 02:49:49.315667  227721 main.go:144] libmachine: Using SSH client type: native
	I0110 02:49:49.316004  227721 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0110 02:49:49.316019  227721 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:49:49.649599  227721 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:49:49.649624  227721 machine.go:97] duration metric: took 4.776867502s to provisionDockerMachine
	I0110 02:49:49.649636  227721 start.go:293] postStartSetup for "newest-cni-733680" (driver="docker")
	I0110 02:49:49.649646  227721 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:49:49.649726  227721 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:49:49.649768  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:49.669276  227721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:49.771913  227721 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:49:49.775059  227721 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:49:49.775090  227721 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:49:49.775102  227721 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/addons for local assets ...
	I0110 02:49:49.775155  227721 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/files for local assets ...
	I0110 02:49:49.775244  227721 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> 41682.pem in /etc/ssl/certs
	I0110 02:49:49.775348  227721 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:49:49.782784  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:49:49.799509  227721 start.go:296] duration metric: took 149.859431ms for postStartSetup
	I0110 02:49:49.799594  227721 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:49:49.799650  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:49.816376  227721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:49.916476  227721 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:49:49.920910  227721 fix.go:56] duration metric: took 5.379023632s for fixHost
	I0110 02:49:49.920936  227721 start.go:83] releasing machines lock for "newest-cni-733680", held for 5.379074707s
	I0110 02:49:49.921003  227721 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-733680
	I0110 02:49:49.937103  227721 ssh_runner.go:195] Run: cat /version.json
	I0110 02:49:49.937151  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:49.937169  227721 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:49:49.937220  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:49.956389  227721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:49.969424  227721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:50.165223  227721 ssh_runner.go:195] Run: systemctl --version
	I0110 02:49:50.171905  227721 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:49:50.208815  227721 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:49:50.213295  227721 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:49:50.213420  227721 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:49:50.221364  227721 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 02:49:50.221389  227721 start.go:496] detecting cgroup driver to use...
	I0110 02:49:50.221420  227721 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 02:49:50.221465  227721 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:49:50.237057  227721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:49:50.250249  227721 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:49:50.250330  227721 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:49:50.266528  227721 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:49:50.280285  227721 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:49:50.396685  227721 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:49:50.521003  227721 docker.go:234] disabling docker service ...
	I0110 02:49:50.521126  227721 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:49:50.540688  227721 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:49:50.556375  227721 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:49:50.683641  227721 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:49:50.808653  227721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:49:50.822460  227721 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:49:50.846923  227721 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:49:50.846995  227721 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:50.856884  227721 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 02:49:50.856953  227721 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:50.866475  227721 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:50.875337  227721 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:50.883923  227721 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:49:50.892014  227721 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:50.901211  227721 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:50.909568  227721 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:50.919195  227721 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:49:50.926804  227721 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:49:50.934752  227721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:49:51.133099  227721 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 02:49:51.350591  227721 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:49:51.350658  227721 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:49:51.354535  227721 start.go:574] Will wait 60s for crictl version
	I0110 02:49:51.354589  227721 ssh_runner.go:195] Run: which crictl
	I0110 02:49:51.358129  227721 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:49:51.386452  227721 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:49:51.386534  227721 ssh_runner.go:195] Run: crio --version
	I0110 02:49:51.413414  227721 ssh_runner.go:195] Run: crio --version
	I0110 02:49:51.446245  227721 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:49:51.449219  227721 cli_runner.go:164] Run: docker network inspect newest-cni-733680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:49:51.467434  227721 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 02:49:51.471068  227721 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:49:51.484080  227721 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0110 02:49:51.487038  227721 kubeadm.go:884] updating cluster {Name:newest-cni-733680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-733680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:49:51.487201  227721 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:49:51.487271  227721 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:49:51.530727  227721 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:49:51.530753  227721 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:49:51.530808  227721 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:49:51.559661  227721 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:49:51.559684  227721 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:49:51.559692  227721 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 02:49:51.559859  227721 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-733680 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-733680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:49:51.559960  227721 ssh_runner.go:195] Run: crio config
	I0110 02:49:51.649920  227721 cni.go:84] Creating CNI manager for ""
	I0110 02:49:51.649944  227721 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:49:51.649967  227721 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I0110 02:49:51.649991  227721 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-733680 NodeName:newest-cni-733680 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:49:51.650121  227721 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-733680"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:49:51.650212  227721 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:49:51.657576  227721 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:49:51.657644  227721 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:49:51.664688  227721 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0110 02:49:51.677725  227721 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:49:51.691704  227721 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I0110 02:49:51.708568  227721 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:49:51.712042  227721 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:49:51.721970  227721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:49:51.852837  227721 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:49:51.873289  227721 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680 for IP: 192.168.76.2
	I0110 02:49:51.873311  227721 certs.go:195] generating shared ca certs ...
	I0110 02:49:51.873327  227721 certs.go:227] acquiring lock for ca certs: {Name:mk60bb8578732e3e8efcf11c7d77fb48585828c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:51.873522  227721 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key
	I0110 02:49:51.873596  227721 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key
	I0110 02:49:51.873608  227721 certs.go:257] generating profile certs ...
	I0110 02:49:51.873727  227721 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/client.key
	I0110 02:49:51.873817  227721 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/apiserver.key.aabe30f3
	I0110 02:49:51.873884  227721 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/proxy-client.key
	I0110 02:49:51.874016  227721 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem (1338 bytes)
	W0110 02:49:51.874066  227721 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168_empty.pem, impossibly tiny 0 bytes
	I0110 02:49:51.874083  227721 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:49:51.874130  227721 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:49:51.874180  227721 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:49:51.874225  227721 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem (1679 bytes)
	I0110 02:49:51.874306  227721 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:49:51.874941  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:49:51.892755  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:49:51.909930  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:49:51.927077  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 02:49:51.944783  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0110 02:49:51.962347  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 02:49:51.985129  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:49:52.012727  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 02:49:52.036283  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:49:52.069839  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I0110 02:49:52.094664  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I0110 02:49:52.127365  227721 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:49:52.145181  227721 ssh_runner.go:195] Run: openssl version
	I0110 02:49:52.151902  227721 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:49:52.160506  227721 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:49:52.168400  227721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:49:52.172018  227721 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:49:52.172119  227721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:49:52.214314  227721 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:49:52.222022  227721 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I0110 02:49:52.229380  227721 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I0110 02:49:52.236864  227721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I0110 02:49:52.240516  227721 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:58 /usr/share/ca-certificates/4168.pem
	I0110 02:49:52.240627  227721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I0110 02:49:52.281571  227721 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:49:52.289095  227721 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I0110 02:49:52.296245  227721 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I0110 02:49:52.303460  227721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I0110 02:49:52.306912  227721 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:58 /usr/share/ca-certificates/41682.pem
	I0110 02:49:52.307011  227721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I0110 02:49:52.349325  227721 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:49:52.356877  227721 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:49:52.360480  227721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 02:49:52.401232  227721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 02:49:52.442515  227721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 02:49:52.483877  227721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 02:49:52.525987  227721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 02:49:52.571183  227721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0110 02:49:52.629363  227721 kubeadm.go:401] StartCluster: {Name:newest-cni-733680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-733680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:49:52.629502  227721 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:49:52.629593  227721 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:49:52.707386  227721 cri.go:96] found id: "ea5078ffbb2ab2a6bbaa2c34fad84cb3a86c77b7c22d703c398c8f81af573168"
	I0110 02:49:52.707455  227721 cri.go:96] found id: "564644b28306f6bcee019ff3074f2c75514833f9bf107fbb9bc76939c4b892b7"
	I0110 02:49:52.707485  227721 cri.go:96] found id: "9ede343cb2e278e170c60426ec5533a577c1bde2bb263e32f4b5e061784803c7"
	I0110 02:49:52.707503  227721 cri.go:96] found id: "b2affd0966bb9dbc433519da5f2d62205ba805d51c020dd49252bd59427932dc"
	I0110 02:49:52.707529  227721 cri.go:96] found id: ""
	I0110 02:49:52.707627  227721 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 02:49:52.732021  227721 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:49:52Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:49:52.732136  227721 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:49:52.742812  227721 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 02:49:52.742890  227721 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 02:49:52.742967  227721 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 02:49:52.755410  227721 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 02:49:52.756033  227721 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-733680" does not appear in /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:49:52.756367  227721 kubeconfig.go:62] /home/jenkins/minikube-integration/22414-2353/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-733680" cluster setting kubeconfig missing "newest-cni-733680" context setting]
	I0110 02:49:52.756808  227721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:52.758406  227721 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 02:49:52.770028  227721 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0110 02:49:52.770062  227721 kubeadm.go:602] duration metric: took 27.152503ms to restartPrimaryControlPlane
	I0110 02:49:52.770090  227721 kubeadm.go:403] duration metric: took 140.747566ms to StartCluster
	I0110 02:49:52.770112  227721 settings.go:142] acquiring lock: {Name:mkf19ab3097ad600bdb33c5b47b20062ddaabac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:52.770199  227721 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:49:52.771113  227721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:52.771349  227721 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:49:52.771738  227721 config.go:182] Loaded profile config "newest-cni-733680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:49:52.771722  227721 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:49:52.771844  227721 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-733680"
	I0110 02:49:52.771860  227721 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-733680"
	W0110 02:49:52.771866  227721 addons.go:248] addon storage-provisioner should already be in state true
	I0110 02:49:52.771875  227721 addons.go:70] Setting dashboard=true in profile "newest-cni-733680"
	I0110 02:49:52.771890  227721 host.go:66] Checking if "newest-cni-733680" exists ...
	I0110 02:49:52.771898  227721 addons.go:70] Setting default-storageclass=true in profile "newest-cni-733680"
	I0110 02:49:52.771909  227721 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-733680"
	I0110 02:49:52.772185  227721 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:52.772324  227721 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:52.771890  227721 addons.go:239] Setting addon dashboard=true in "newest-cni-733680"
	W0110 02:49:52.772704  227721 addons.go:248] addon dashboard should already be in state true
	I0110 02:49:52.772782  227721 host.go:66] Checking if "newest-cni-733680" exists ...
	I0110 02:49:52.773237  227721 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:52.777211  227721 out.go:179] * Verifying Kubernetes components...
	I0110 02:49:52.780387  227721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:49:52.826239  227721 addons.go:239] Setting addon default-storageclass=true in "newest-cni-733680"
	W0110 02:49:52.826259  227721 addons.go:248] addon default-storageclass should already be in state true
	I0110 02:49:52.826283  227721 host.go:66] Checking if "newest-cni-733680" exists ...
	I0110 02:49:52.826683  227721 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:52.834786  227721 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:49:52.838077  227721 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:49:52.838103  227721 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:49:52.838172  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:52.850062  227721 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 02:49:52.853327  227721 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 02:49:52.858599  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 02:49:52.858622  227721 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 02:49:52.858689  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:52.875042  227721 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:49:52.875065  227721 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:49:52.875142  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:52.895967  227721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:52.918908  227721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:52.923631  227721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:53.067602  227721 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:49:53.205587  227721 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:49:53.237603  227721 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:49:53.240836  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 02:49:53.240858  227721 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 02:49:53.295176  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 02:49:53.295248  227721 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 02:49:53.350797  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 02:49:53.350872  227721 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 02:49:53.411170  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 02:49:53.411250  227721 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 02:49:53.461048  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 02:49:53.461124  227721 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 02:49:53.492736  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 02:49:53.492809  227721 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 02:49:53.513750  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 02:49:53.513845  227721 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 02:49:53.556492  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 02:49:53.556562  227721 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 02:49:53.581438  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:49:53.581506  227721 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 02:49:53.613630  227721 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:49:57.046329  227721 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.978687072s)
	I0110 02:49:57.046422  227721 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.84080946s)
	I0110 02:49:57.046637  227721 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:49:57.046449  227721 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.808826332s)
	I0110 02:49:57.046542  227721 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.432840606s)
	I0110 02:49:57.047187  227721 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:49:57.050038  227721 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-733680 addons enable metrics-server
	
	I0110 02:49:57.071486  227721 api_server.go:72] duration metric: took 4.300106932s to wait for apiserver process to appear ...
	I0110 02:49:57.071509  227721 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:49:57.071527  227721 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:49:57.080089  227721 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 02:49:57.080120  227721 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 02:49:57.093756  227721 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0110 02:49:57.096511  227721 addons.go:530] duration metric: took 4.324781462s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0110 02:49:57.572427  227721 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:49:57.586847  227721 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 02:49:57.587967  227721 api_server.go:141] control plane version: v1.35.0
	I0110 02:49:57.588028  227721 api_server.go:131] duration metric: took 516.494969ms to wait for apiserver health ...
	I0110 02:49:57.588047  227721 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:49:57.592332  227721 system_pods.go:59] 8 kube-system pods found
	I0110 02:49:57.592387  227721 system_pods.go:61] "coredns-7d764666f9-7djps" [5b40a1e2-6d92-4e33-8a94-19e45bc18937] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 02:49:57.592406  227721 system_pods.go:61] "etcd-newest-cni-733680" [15903ffb-9c75-402b-aaf1-2ea433e993a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:49:57.592415  227721 system_pods.go:61] "kindnet-bnwfz" [49fa87e1-f1d9-4315-918f-b079caade618] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 02:49:57.592429  227721 system_pods.go:61] "kube-apiserver-newest-cni-733680" [7daa7cb6-3a2f-4d11-aadd-c7aab970ff4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:49:57.592437  227721 system_pods.go:61] "kube-controller-manager-newest-cni-733680" [f5a37a0a-47e7-41e5-a01f-16efb8c43166] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:49:57.592476  227721 system_pods.go:61] "kube-proxy-mnr64" [0f5cc6b8-6364-4711-a838-fb70b057c4ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 02:49:57.592490  227721 system_pods.go:61] "kube-scheduler-newest-cni-733680" [d5f0e1ba-9a06-4172-a5ec-140a553a47ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:49:57.592502  227721 system_pods.go:61] "storage-provisioner" [1a9287f6-2918-4098-8444-2f1c2c4dda71] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 02:49:57.592514  227721 system_pods.go:74] duration metric: took 4.460764ms to wait for pod list to return data ...
	I0110 02:49:57.592543  227721 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:49:57.594834  227721 default_sa.go:45] found service account: "default"
	I0110 02:49:57.594857  227721 default_sa.go:55] duration metric: took 2.300878ms for default service account to be created ...
	I0110 02:49:57.594870  227721 kubeadm.go:587] duration metric: took 4.823494248s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 02:49:57.594915  227721 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:49:57.597527  227721 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 02:49:57.597564  227721 node_conditions.go:123] node cpu capacity is 2
	I0110 02:49:57.597577  227721 node_conditions.go:105] duration metric: took 2.655954ms to run NodePressure ...
	I0110 02:49:57.597610  227721 start.go:242] waiting for startup goroutines ...
	I0110 02:49:57.597619  227721 start.go:247] waiting for cluster config update ...
	I0110 02:49:57.597664  227721 start.go:256] writing updated cluster config ...
	I0110 02:49:57.597984  227721 ssh_runner.go:195] Run: rm -f paused
	I0110 02:49:57.679374  227721 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 02:49:57.682549  227721 out.go:203] 
	W0110 02:49:57.685626  227721 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 02:49:57.690644  227721 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:49:57.693501  227721 out.go:179] * Done! kubectl is now configured to use "newest-cni-733680" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.332502905Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.341422332Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=fd1f185d-34fd-4fd6-ad98-ed5fb7ae718a name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.341770097Z" level=info msg="Running pod sandbox: kube-system/kindnet-bnwfz/POD" id=803b57a9-2332-4933-ad48-7d3fb61240aa name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.341842013Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.345130369Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=803b57a9-2332-4933-ad48-7d3fb61240aa name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.348856597Z" level=info msg="Ran pod sandbox 4e5a829f07b47389342609b465c7f583b2849a5bf78f39ea7a091a61a15c1752 with infra container: kube-system/kindnet-bnwfz/POD" id=803b57a9-2332-4933-ad48-7d3fb61240aa name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.353020253Z" level=info msg="Ran pod sandbox 9502b267985b589357f22c19d88e434650d9c3289a00a07ea2b1f8dc5a5e8e6f with infra container: kube-system/kube-proxy-mnr64/POD" id=fd1f185d-34fd-4fd6-ad98-ed5fb7ae718a name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.35615082Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=bec97807-8c02-4b7f-9073-f8aeff2fc48f name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.356422033Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=273f83ee-ec4a-4747-84a9-9d09fe3bcd07 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.358783522Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=aafba74b-c63d-42a0-8291-75fbe9eeefcc name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.358931571Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=b0849b49-f4f0-426c-ba68-0924473e9aad name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.360167772Z" level=info msg="Creating container: kube-system/kube-proxy-mnr64/kube-proxy" id=e1d3fb54-374c-4544-86f5-03a639cb5c37 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.360341388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.360191148Z" level=info msg="Creating container: kube-system/kindnet-bnwfz/kindnet-cni" id=4a17c2df-916a-4634-aeda-f9032ddda821 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.360585246Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.365687199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.366183391Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.36824412Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.368879303Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.385414767Z" level=info msg="Created container f3115642cb9635a0a0fb487d418fee931ed516febc3248ca4c87c80d330b4a87: kube-system/kindnet-bnwfz/kindnet-cni" id=4a17c2df-916a-4634-aeda-f9032ddda821 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.388410454Z" level=info msg="Created container d50472bf13056b006d7be9a47a3b5f1710e956b40dc97490122f62498e3e741b: kube-system/kube-proxy-mnr64/kube-proxy" id=e1d3fb54-374c-4544-86f5-03a639cb5c37 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.389520021Z" level=info msg="Starting container: f3115642cb9635a0a0fb487d418fee931ed516febc3248ca4c87c80d330b4a87" id=a0b58415-9b71-44cf-ab92-ace098aef13f name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.389760646Z" level=info msg="Starting container: d50472bf13056b006d7be9a47a3b5f1710e956b40dc97490122f62498e3e741b" id=a3be81b8-c114-4e1f-9eb2-980ddda21736 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.391417669Z" level=info msg="Started container" PID=1069 containerID=f3115642cb9635a0a0fb487d418fee931ed516febc3248ca4c87c80d330b4a87 description=kube-system/kindnet-bnwfz/kindnet-cni id=a0b58415-9b71-44cf-ab92-ace098aef13f name=/runtime.v1.RuntimeService/StartContainer sandboxID=4e5a829f07b47389342609b465c7f583b2849a5bf78f39ea7a091a61a15c1752
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.395209371Z" level=info msg="Started container" PID=1073 containerID=d50472bf13056b006d7be9a47a3b5f1710e956b40dc97490122f62498e3e741b description=kube-system/kube-proxy-mnr64/kube-proxy id=a3be81b8-c114-4e1f-9eb2-980ddda21736 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9502b267985b589357f22c19d88e434650d9c3289a00a07ea2b1f8dc5a5e8e6f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d50472bf13056       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   4 seconds ago       Running             kube-proxy                1                   9502b267985b5       kube-proxy-mnr64                            kube-system
	f3115642cb963       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13   4 seconds ago       Running             kindnet-cni               1                   4e5a829f07b47       kindnet-bnwfz                               kube-system
	ea5078ffbb2ab       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   9 seconds ago       Running             kube-controller-manager   1                   78b4c2ed69206       kube-controller-manager-newest-cni-733680   kube-system
	564644b28306f       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   9 seconds ago       Running             etcd                      1                   ad62c517f63a6       etcd-newest-cni-733680                      kube-system
	9ede343cb2e27       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   9 seconds ago       Running             kube-scheduler            1                   78db093b1df3c       kube-scheduler-newest-cni-733680            kube-system
	b2affd0966bb9       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   9 seconds ago       Running             kube-apiserver            1                   a35b014e338ff       kube-apiserver-newest-cni-733680            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-733680
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-733680
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=newest-cni-733680
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_49_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:49:30 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-733680
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:49:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:49:56 +0000   Sat, 10 Jan 2026 02:49:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:49:56 +0000   Sat, 10 Jan 2026 02:49:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:49:56 +0000   Sat, 10 Jan 2026 02:49:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 10 Jan 2026 02:49:56 +0000   Sat, 10 Jan 2026 02:49:26 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-733680
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                c296da09-1974-40e8-a0f3-9e9b1313e8dc
	  Boot ID:                    41f52236-76b9-4dc8-bfe3-121fbdeb9659
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-733680                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         30s
	  kube-system                 kindnet-bnwfz                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-newest-cni-733680             250m (12%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-newest-cni-733680    200m (10%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-mnr64                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-newest-cni-733680             100m (5%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  26s   node-controller  Node newest-cni-733680 event: Registered Node newest-cni-733680 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-733680 event: Registered Node newest-cni-733680 in Controller
	
	
	==> dmesg <==
	[Jan10 02:17] overlayfs: idmapped layers are currently not supported
	[Jan10 02:18] overlayfs: idmapped layers are currently not supported
	[ +23.212409] overlayfs: idmapped layers are currently not supported
	[  +9.866169] overlayfs: idmapped layers are currently not supported
	[Jan10 02:19] overlayfs: idmapped layers are currently not supported
	[Jan10 02:20] overlayfs: idmapped layers are currently not supported
	[ +35.960240] overlayfs: idmapped layers are currently not supported
	[Jan10 02:21] overlayfs: idmapped layers are currently not supported
	[Jan10 02:23] overlayfs: idmapped layers are currently not supported
	[Jan10 02:24] overlayfs: idmapped layers are currently not supported
	[Jan10 02:25] overlayfs: idmapped layers are currently not supported
	[Jan10 02:30] overlayfs: idmapped layers are currently not supported
	[Jan10 02:31] overlayfs: idmapped layers are currently not supported
	[Jan10 02:35] overlayfs: idmapped layers are currently not supported
	[Jan10 02:37] overlayfs: idmapped layers are currently not supported
	[Jan10 02:41] overlayfs: idmapped layers are currently not supported
	[ +37.534021] overlayfs: idmapped layers are currently not supported
	[Jan10 02:43] overlayfs: idmapped layers are currently not supported
	[Jan10 02:44] overlayfs: idmapped layers are currently not supported
	[Jan10 02:45] overlayfs: idmapped layers are currently not supported
	[Jan10 02:46] overlayfs: idmapped layers are currently not supported
	[Jan10 02:48] overlayfs: idmapped layers are currently not supported
	[Jan10 02:49] overlayfs: idmapped layers are currently not supported
	[  +4.690964] overlayfs: idmapped layers are currently not supported
	[ +26.361261] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [564644b28306f6bcee019ff3074f2c75514833f9bf107fbb9bc76939c4b892b7] <==
	{"level":"info","ts":"2026-01-10T02:49:53.142290Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:49:53.142384Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:49:53.147912Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:49:53.147949Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:49:53.148965Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2026-01-10T02:49:53.149043Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T02:49:53.149117Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T02:49:53.551836Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T02:49:53.551967Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:49:53.552048Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T02:49:53.552089Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:49:53.552132Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T02:49:53.556690Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T02:49:53.556781Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:49:53.556827Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T02:49:53.556864Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T02:49:53.559546Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-733680 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:49:53.562361Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:49:53.563358Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:49:53.581904Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:49:53.582061Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:49:53.582074Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:49:53.582721Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:49:53.646788Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T02:49:53.656609Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 02:50:02 up  1:32,  0 user,  load average: 3.94, 2.73, 2.16
	Linux newest-cni-733680 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f3115642cb9635a0a0fb487d418fee931ed516febc3248ca4c87c80d330b4a87] <==
	I0110 02:49:57.525787       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:49:57.526315       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 02:49:57.526458       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:49:57.526481       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:49:57.526492       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:49:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:49:57.725282       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:49:57.725303       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:49:57.725318       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:49:57.726454       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [b2affd0966bb9dbc433519da5f2d62205ba805d51c020dd49252bd59427932dc] <==
	I0110 02:49:56.047699       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0110 02:49:56.048501       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 02:49:56.054353       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:56.054381       1 policy_source.go:248] refreshing policies
	I0110 02:49:56.057701       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:49:56.057735       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:56.057785       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 02:49:56.058371       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 02:49:56.100310       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 02:49:56.116677       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 02:49:56.146696       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:56.197136       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E0110 02:49:56.202146       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 02:49:56.648975       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:49:56.702086       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:49:56.707092       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:49:56.748366       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:49:56.765966       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:49:56.779979       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:49:56.855430       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.222.248"}
	I0110 02:49:56.893053       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.185.9"}
	I0110 02:49:59.321962       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:49:59.521900       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:49:59.572260       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 02:49:59.722185       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [ea5078ffbb2ab2a6bbaa2c34fad84cb3a86c77b7c22d703c398c8f81af573168] <==
	I0110 02:49:59.001281       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.001329       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.001426       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.001549       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.001566       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.001606       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.001639       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.007543       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.007639       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.007696       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.007717       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.007956       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.008053       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 02:49:59.008145       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-733680"
	I0110 02:49:59.008202       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0110 02:49:59.008224       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.008264       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.008310       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.008554       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.019101       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.036858       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.083392       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.089781       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.089917       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:49:59.089949       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [d50472bf13056b006d7be9a47a3b5f1710e956b40dc97490122f62498e3e741b] <==
	I0110 02:49:57.441356       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:49:57.527020       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:49:57.628298       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:57.628414       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 02:49:57.628540       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:49:57.652737       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:49:57.652798       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:49:57.656470       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:49:57.656873       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:49:57.656893       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:49:57.661779       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:49:57.661849       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:49:57.662162       1 config.go:200] "Starting service config controller"
	I0110 02:49:57.662222       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:49:57.662566       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:49:57.662607       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:49:57.663046       1 config.go:309] "Starting node config controller"
	I0110 02:49:57.663090       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:49:57.663119       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:49:57.763929       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:49:57.763975       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 02:49:57.764013       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [9ede343cb2e278e170c60426ec5533a577c1bde2bb263e32f4b5e061784803c7] <==
	I0110 02:49:54.173375       1 serving.go:386] Generated self-signed cert in-memory
	W0110 02:49:55.909645       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 02:49:55.909752       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 02:49:55.909787       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 02:49:55.909828       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 02:49:56.064297       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 02:49:56.064333       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:49:56.066695       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 02:49:56.066827       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 02:49:56.066837       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:49:56.066853       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 02:49:56.266907       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:49:56 newest-cni-733680 kubelet[736]: I0110 02:49:56.220360     736 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-733680"
	Jan 10 02:49:56 newest-cni-733680 kubelet[736]: I0110 02:49:56.220419     736 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Jan 10 02:49:56 newest-cni-733680 kubelet[736]: I0110 02:49:56.222169     736 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Jan 10 02:49:56 newest-cni-733680 kubelet[736]: E0110 02:49:56.249718     736 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-733680\" already exists" pod="kube-system/etcd-newest-cni-733680"
	Jan 10 02:49:56 newest-cni-733680 kubelet[736]: I0110 02:49:56.249752     736 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-733680"
	Jan 10 02:49:56 newest-cni-733680 kubelet[736]: E0110 02:49:56.273485     736 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-733680\" already exists" pod="kube-system/kube-apiserver-newest-cni-733680"
	Jan 10 02:49:56 newest-cni-733680 kubelet[736]: I0110 02:49:56.273528     736 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-733680"
	Jan 10 02:49:56 newest-cni-733680 kubelet[736]: E0110 02:49:56.298682     736 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-733680\" already exists" pod="kube-system/kube-controller-manager-newest-cni-733680"
	Jan 10 02:49:56 newest-cni-733680 kubelet[736]: I0110 02:49:56.298727     736 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-733680"
	Jan 10 02:49:56 newest-cni-733680 kubelet[736]: E0110 02:49:56.312314     736 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-733680\" already exists" pod="kube-system/kube-scheduler-newest-cni-733680"
	Jan 10 02:49:57 newest-cni-733680 kubelet[736]: I0110 02:49:57.002345     736 apiserver.go:52] "Watching apiserver"
	Jan 10 02:49:57 newest-cni-733680 kubelet[736]: E0110 02:49:57.026013     736 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-733680" containerName="etcd"
	Jan 10 02:49:57 newest-cni-733680 kubelet[736]: E0110 02:49:57.026526     736 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-733680" containerName="kube-scheduler"
	Jan 10 02:49:57 newest-cni-733680 kubelet[736]: E0110 02:49:57.026928     736 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-733680" containerName="kube-apiserver"
	Jan 10 02:49:57 newest-cni-733680 kubelet[736]: E0110 02:49:57.027330     736 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-733680" containerName="kube-controller-manager"
	Jan 10 02:49:57 newest-cni-733680 kubelet[736]: I0110 02:49:57.112096     736 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jan 10 02:49:57 newest-cni-733680 kubelet[736]: I0110 02:49:57.193186     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/49fa87e1-f1d9-4315-918f-b079caade618-cni-cfg\") pod \"kindnet-bnwfz\" (UID: \"49fa87e1-f1d9-4315-918f-b079caade618\") " pod="kube-system/kindnet-bnwfz"
	Jan 10 02:49:57 newest-cni-733680 kubelet[736]: I0110 02:49:57.193387     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49fa87e1-f1d9-4315-918f-b079caade618-xtables-lock\") pod \"kindnet-bnwfz\" (UID: \"49fa87e1-f1d9-4315-918f-b079caade618\") " pod="kube-system/kindnet-bnwfz"
	Jan 10 02:49:57 newest-cni-733680 kubelet[736]: I0110 02:49:57.193506     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f5cc6b8-6364-4711-a838-fb70b057c4ef-xtables-lock\") pod \"kube-proxy-mnr64\" (UID: \"0f5cc6b8-6364-4711-a838-fb70b057c4ef\") " pod="kube-system/kube-proxy-mnr64"
	Jan 10 02:49:57 newest-cni-733680 kubelet[736]: I0110 02:49:57.193599     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f5cc6b8-6364-4711-a838-fb70b057c4ef-lib-modules\") pod \"kube-proxy-mnr64\" (UID: \"0f5cc6b8-6364-4711-a838-fb70b057c4ef\") " pod="kube-system/kube-proxy-mnr64"
	Jan 10 02:49:57 newest-cni-733680 kubelet[736]: I0110 02:49:57.193703     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49fa87e1-f1d9-4315-918f-b079caade618-lib-modules\") pod \"kindnet-bnwfz\" (UID: \"49fa87e1-f1d9-4315-918f-b079caade618\") " pod="kube-system/kindnet-bnwfz"
	Jan 10 02:49:57 newest-cni-733680 kubelet[736]: I0110 02:49:57.204794     736 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Jan 10 02:49:58 newest-cni-733680 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 02:49:58 newest-cni-733680 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 02:49:58 newest-cni-733680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-733680 -n newest-cni-733680
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-733680 -n newest-cni-733680: exit status 2 (471.082638ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-733680 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-7djps storage-provisioner dashboard-metrics-scraper-867fb5f87b-vxw7r kubernetes-dashboard-b84665fb8-kqjbx
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-733680 describe pod coredns-7d764666f9-7djps storage-provisioner dashboard-metrics-scraper-867fb5f87b-vxw7r kubernetes-dashboard-b84665fb8-kqjbx
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-733680 describe pod coredns-7d764666f9-7djps storage-provisioner dashboard-metrics-scraper-867fb5f87b-vxw7r kubernetes-dashboard-b84665fb8-kqjbx: exit status 1 (125.530159ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-7djps" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-vxw7r" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-kqjbx" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-733680 describe pod coredns-7d764666f9-7djps storage-provisioner dashboard-metrics-scraper-867fb5f87b-vxw7r kubernetes-dashboard-b84665fb8-kqjbx: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-733680
helpers_test.go:244: (dbg) docker inspect newest-cni-733680:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "332f4ab8cb32b25616ea970bfb5fb05bb610d82c68c84ff5df6ca1a0461deecb",
	        "Created": "2026-01-10T02:49:13.990665872Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 227850,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:49:44.598785864Z",
	            "FinishedAt": "2026-01-10T02:49:43.793795697Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/332f4ab8cb32b25616ea970bfb5fb05bb610d82c68c84ff5df6ca1a0461deecb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/332f4ab8cb32b25616ea970bfb5fb05bb610d82c68c84ff5df6ca1a0461deecb/hostname",
	        "HostsPath": "/var/lib/docker/containers/332f4ab8cb32b25616ea970bfb5fb05bb610d82c68c84ff5df6ca1a0461deecb/hosts",
	        "LogPath": "/var/lib/docker/containers/332f4ab8cb32b25616ea970bfb5fb05bb610d82c68c84ff5df6ca1a0461deecb/332f4ab8cb32b25616ea970bfb5fb05bb610d82c68c84ff5df6ca1a0461deecb-json.log",
	        "Name": "/newest-cni-733680",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-733680:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-733680",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "332f4ab8cb32b25616ea970bfb5fb05bb610d82c68c84ff5df6ca1a0461deecb",
	                "LowerDir": "/var/lib/docker/overlay2/84cf567e665ab8640c65dbd3cc3a50ebdb994b7629de1cf36f4ce8e5f0c36014-init/diff:/var/lib/docker/overlay2/2b235871be32ebd3e55c0a490bf5b289824c6be94b4d2763f5bca3b814af3fd1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/84cf567e665ab8640c65dbd3cc3a50ebdb994b7629de1cf36f4ce8e5f0c36014/merged",
	                "UpperDir": "/var/lib/docker/overlay2/84cf567e665ab8640c65dbd3cc3a50ebdb994b7629de1cf36f4ce8e5f0c36014/diff",
	                "WorkDir": "/var/lib/docker/overlay2/84cf567e665ab8640c65dbd3cc3a50ebdb994b7629de1cf36f4ce8e5f0c36014/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-733680",
	                "Source": "/var/lib/docker/volumes/newest-cni-733680/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-733680",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-733680",
	                "name.minikube.sigs.k8s.io": "newest-cni-733680",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d6bbe9c8a2a75107074b9ca17c9c433e43c618ae4b8f1048e2645d94dc5161ab",
	            "SandboxKey": "/var/run/docker/netns/d6bbe9c8a2a7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-733680": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:29:39:1a:e1:62",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c13b302fa5016d01187ac7a2edef31e75f6720c21560b52b4739e7f7514c4136",
	                    "EndpointID": "7dcd9a50e728c6dcac304e464dc877f6a39d5452440b0d27ca43ff51aa7d853c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-733680",
	                        "332f4ab8cb32"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-733680 -n newest-cni-733680
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-733680 -n newest-cni-733680: exit status 2 (474.9915ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-733680 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-733680 logs -n 25: (1.189779307s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p embed-certs-290628 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │                     │
	│ delete  │ -p embed-certs-290628                                                                                                                                                                                                                         │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ delete  │ -p embed-certs-290628                                                                                                                                                                                                                         │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ delete  │ -p disable-driver-mounts-990753                                                                                                                                                                                                               │ disable-driver-mounts-990753 │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ start   │ -p no-preload-676905 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:47 UTC │
	│ addons  │ enable metrics-server -p no-preload-676905 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │                     │
	│ stop    │ -p no-preload-676905 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │ 10 Jan 26 02:47 UTC │
	│ addons  │ enable dashboard -p no-preload-676905 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │ 10 Jan 26 02:47 UTC │
	│ start   │ -p no-preload-676905 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │ 10 Jan 26 02:48 UTC │
	│ image   │ no-preload-676905 image list --format=json                                                                                                                                                                                                    │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │ 10 Jan 26 02:48 UTC │
	│ pause   │ -p no-preload-676905 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │                     │
	│ ssh     │ force-systemd-flag-038359 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-038359    │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │ 10 Jan 26 02:48 UTC │
	│ delete  │ -p force-systemd-flag-038359                                                                                                                                                                                                                  │ force-systemd-flag-038359    │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │ 10 Jan 26 02:49 UTC │
	│ start   │ -p default-k8s-diff-port-403885 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-403885 │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ delete  │ -p no-preload-676905                                                                                                                                                                                                                          │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ delete  │ -p no-preload-676905                                                                                                                                                                                                                          │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ start   │ -p newest-cni-733680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-733680            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ addons  │ enable metrics-server -p newest-cni-733680 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-733680            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │                     │
	│ stop    │ -p newest-cni-733680 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-733680            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ addons  │ enable dashboard -p newest-cni-733680 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-733680            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ start   │ -p newest-cni-733680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-733680            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ image   │ newest-cni-733680 image list --format=json                                                                                                                                                                                                    │ newest-cni-733680            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ pause   │ -p newest-cni-733680 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-733680            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-403885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-403885 │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-403885 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-403885 │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
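The pause entry for newest-cni-733680 in the audit trail above has no END TIME, and that is the step under test here. For anyone re-driving the post-mortem by hand, a minimal sketch assembled from commands that appear elsewhere in this report (binary path and profile name as in this run; the explicit status call is an assumption about how to observe the "exit status 2 (may be ok)" noted above, not a command copied from the log):

	out/minikube-linux-arm64 pause -p newest-cni-733680 --alsologtostderr -v=1
	# exit status 2 from status is the "paused" case the helpers flag as "may be ok"
	out/minikube-linux-arm64 status -p newest-cni-733680 || true
	# container-level view of the same state
	docker container inspect newest-cni-733680 --format='{{.State.Status}}'
	# collect the same post-mortem logs the harness gathers
	out/minikube-linux-arm64 -p newest-cni-733680 logs -n 25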
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:49:44
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:49:44.309497  227721 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:49:44.309694  227721 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:49:44.309721  227721 out.go:374] Setting ErrFile to fd 2...
	I0110 02:49:44.309742  227721 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:49:44.310463  227721 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:49:44.310909  227721 out.go:368] Setting JSON to false
	I0110 02:49:44.311837  227721 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5534,"bootTime":1768007851,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 02:49:44.311908  227721 start.go:143] virtualization:  
	I0110 02:49:44.314988  227721 out.go:179] * [newest-cni-733680] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:49:44.318901  227721 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:49:44.318981  227721 notify.go:221] Checking for updates...
	I0110 02:49:44.324985  227721 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:49:44.328078  227721 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:49:44.331185  227721 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 02:49:44.334208  227721 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:49:44.338487  227721 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:49:44.341862  227721 config.go:182] Loaded profile config "newest-cni-733680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:49:44.342521  227721 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:49:44.367410  227721 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:49:44.367524  227721 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:49:44.444412  227721 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:49:44.433211036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:49:44.444531  227721 docker.go:319] overlay module found
	I0110 02:49:44.447876  227721 out.go:179] * Using the docker driver based on existing profile
	I0110 02:49:44.451345  227721 start.go:309] selected driver: docker
	I0110 02:49:44.451366  227721 start.go:928] validating driver "docker" against &{Name:newest-cni-733680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-733680 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:49:44.451475  227721 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:49:44.452206  227721 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:49:44.507205  227721 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:49:44.498529625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:49:44.507559  227721 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 02:49:44.507615  227721 cni.go:84] Creating CNI manager for ""
	I0110 02:49:44.507673  227721 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:49:44.507711  227721 start.go:353] cluster config:
	{Name:newest-cni-733680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-733680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:49:44.512606  227721 out.go:179] * Starting "newest-cni-733680" primary control-plane node in "newest-cni-733680" cluster
	I0110 02:49:44.515426  227721 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:49:44.518453  227721 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:49:44.521364  227721 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:49:44.521413  227721 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 02:49:44.521426  227721 cache.go:65] Caching tarball of preloaded images
	I0110 02:49:44.521435  227721 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:49:44.521506  227721 preload.go:251] Found /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 02:49:44.521516  227721 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:49:44.521629  227721 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/config.json ...
	I0110 02:49:44.541711  227721 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:49:44.541735  227721 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:49:44.541750  227721 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:49:44.541787  227721 start.go:360] acquireMachinesLock for newest-cni-733680: {Name:mkffafc06373cf7d630e08f2554eaef3a62ff5c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:49:44.541849  227721 start.go:364] duration metric: took 34.297µs to acquireMachinesLock for "newest-cni-733680"
	I0110 02:49:44.541873  227721 start.go:96] Skipping create...Using existing machine configuration
	I0110 02:49:44.541886  227721 fix.go:54] fixHost starting: 
	I0110 02:49:44.542210  227721 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:44.558798  227721 fix.go:112] recreateIfNeeded on newest-cni-733680: state=Stopped err=<nil>
	W0110 02:49:44.558830  227721 fix.go:138] unexpected machine state, will restart: <nil>
	W0110 02:49:44.679057  221603 node_ready.go:57] node "default-k8s-diff-port-403885" has "Ready":"False" status (will retry)
	W0110 02:49:47.175918  221603 node_ready.go:57] node "default-k8s-diff-port-403885" has "Ready":"False" status (will retry)
	I0110 02:49:44.562180  227721 out.go:252] * Restarting existing docker container for "newest-cni-733680" ...
	I0110 02:49:44.562266  227721 cli_runner.go:164] Run: docker start newest-cni-733680
	I0110 02:49:44.826055  227721 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:44.849274  227721 kic.go:430] container "newest-cni-733680" state is running.
	I0110 02:49:44.849680  227721 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-733680
	I0110 02:49:44.872507  227721 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/config.json ...
	I0110 02:49:44.872747  227721 machine.go:94] provisionDockerMachine start ...
	I0110 02:49:44.872811  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:44.892850  227721 main.go:144] libmachine: Using SSH client type: native
	I0110 02:49:44.893319  227721 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0110 02:49:44.893332  227721 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:49:44.893964  227721 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 02:49:48.067572  227721 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-733680
	
	I0110 02:49:48.067638  227721 ubuntu.go:182] provisioning hostname "newest-cni-733680"
	I0110 02:49:48.067727  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:48.098348  227721 main.go:144] libmachine: Using SSH client type: native
	I0110 02:49:48.098657  227721 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0110 02:49:48.098670  227721 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-733680 && echo "newest-cni-733680" | sudo tee /etc/hostname
	I0110 02:49:48.261145  227721 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-733680
	
	I0110 02:49:48.261217  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:48.281566  227721 main.go:144] libmachine: Using SSH client type: native
	I0110 02:49:48.281886  227721 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0110 02:49:48.281901  227721 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-733680' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-733680/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-733680' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:49:48.427922  227721 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:49:48.428009  227721 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2353/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2353/.minikube}
	I0110 02:49:48.428044  227721 ubuntu.go:190] setting up certificates
	I0110 02:49:48.428085  227721 provision.go:84] configureAuth start
	I0110 02:49:48.428173  227721 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-733680
	I0110 02:49:48.444728  227721 provision.go:143] copyHostCerts
	I0110 02:49:48.444802  227721 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem, removing ...
	I0110 02:49:48.444824  227721 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem
	I0110 02:49:48.444902  227721 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem (1082 bytes)
	I0110 02:49:48.445010  227721 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem, removing ...
	I0110 02:49:48.445020  227721 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem
	I0110 02:49:48.445048  227721 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem (1123 bytes)
	I0110 02:49:48.445111  227721 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem, removing ...
	I0110 02:49:48.445119  227721 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem
	I0110 02:49:48.445144  227721 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem (1679 bytes)
	I0110 02:49:48.445221  227721 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem org=jenkins.newest-cni-733680 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-733680]
	I0110 02:49:49.101821  227721 provision.go:177] copyRemoteCerts
	I0110 02:49:49.101923  227721 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:49:49.101988  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:49.119449  227721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:49.225157  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 02:49:49.258065  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:49:49.279139  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:49:49.297184  227721 provision.go:87] duration metric: took 869.064424ms to configureAuth
	I0110 02:49:49.297209  227721 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:49:49.297398  227721 config.go:182] Loaded profile config "newest-cni-733680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:49:49.297505  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:47.676475  221603 node_ready.go:49] node "default-k8s-diff-port-403885" is "Ready"
	I0110 02:49:47.676508  221603 node_ready.go:38] duration metric: took 12.003635661s for node "default-k8s-diff-port-403885" to be "Ready" ...
	I0110 02:49:47.676521  221603 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:49:47.676580  221603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:49:47.689566  221603 api_server.go:72] duration metric: took 13.671553329s to wait for apiserver process to appear ...
	I0110 02:49:47.689593  221603 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:49:47.689613  221603 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0110 02:49:47.698429  221603 api_server.go:325] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0110 02:49:47.699635  221603 api_server.go:141] control plane version: v1.35.0
	I0110 02:49:47.699659  221603 api_server.go:131] duration metric: took 10.058794ms to wait for apiserver health ...
	I0110 02:49:47.699668  221603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:49:47.702731  221603 system_pods.go:59] 8 kube-system pods found
	I0110 02:49:47.702773  221603 system_pods.go:61] "coredns-7d764666f9-sck2c" [8791efbb-04ac-4811-983a-9ffaf7bb15be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:49:47.702819  221603 system_pods.go:61] "etcd-default-k8s-diff-port-403885" [bdc3bfff-d5db-47c9-8f0f-81256caffbcb] Running
	I0110 02:49:47.702828  221603 system_pods.go:61] "kindnet-4h8vm" [f5820215-db87-4ed2-99a4-3970efeca785] Running
	I0110 02:49:47.702839  221603 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-403885" [708252d4-cdfc-4c44-bf86-993f95e9cbb6] Running
	I0110 02:49:47.702845  221603 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-403885" [a6e8ce72-0215-4cdc-b8a5-30863aba7c54] Running
	I0110 02:49:47.702852  221603 system_pods.go:61] "kube-proxy-ss9fs" [61ec02c4-f966-46de-bd2f-81de41f8f1bb] Running
	I0110 02:49:47.702875  221603 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-403885" [fde84223-af60-46ed-b21b-ccd83712651a] Running
	I0110 02:49:47.702889  221603 system_pods.go:61] "storage-provisioner" [83270b0f-e37c-4f07-a380-3ffb6386d492] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:49:47.702904  221603 system_pods.go:74] duration metric: took 3.222266ms to wait for pod list to return data ...
	I0110 02:49:47.702917  221603 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:49:47.705574  221603 default_sa.go:45] found service account: "default"
	I0110 02:49:47.705597  221603 default_sa.go:55] duration metric: took 2.67266ms for default service account to be created ...
	I0110 02:49:47.705606  221603 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:49:47.708460  221603 system_pods.go:86] 8 kube-system pods found
	I0110 02:49:47.708495  221603 system_pods.go:89] "coredns-7d764666f9-sck2c" [8791efbb-04ac-4811-983a-9ffaf7bb15be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:49:47.708502  221603 system_pods.go:89] "etcd-default-k8s-diff-port-403885" [bdc3bfff-d5db-47c9-8f0f-81256caffbcb] Running
	I0110 02:49:47.708509  221603 system_pods.go:89] "kindnet-4h8vm" [f5820215-db87-4ed2-99a4-3970efeca785] Running
	I0110 02:49:47.708514  221603 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-403885" [708252d4-cdfc-4c44-bf86-993f95e9cbb6] Running
	I0110 02:49:47.708519  221603 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-403885" [a6e8ce72-0215-4cdc-b8a5-30863aba7c54] Running
	I0110 02:49:47.708545  221603 system_pods.go:89] "kube-proxy-ss9fs" [61ec02c4-f966-46de-bd2f-81de41f8f1bb] Running
	I0110 02:49:47.708556  221603 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-403885" [fde84223-af60-46ed-b21b-ccd83712651a] Running
	I0110 02:49:47.708563  221603 system_pods.go:89] "storage-provisioner" [83270b0f-e37c-4f07-a380-3ffb6386d492] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:49:47.708593  221603 retry.go:84] will retry after 200ms: missing components: kube-dns
	I0110 02:49:47.905691  221603 system_pods.go:86] 8 kube-system pods found
	I0110 02:49:47.905730  221603 system_pods.go:89] "coredns-7d764666f9-sck2c" [8791efbb-04ac-4811-983a-9ffaf7bb15be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:49:47.905763  221603 system_pods.go:89] "etcd-default-k8s-diff-port-403885" [bdc3bfff-d5db-47c9-8f0f-81256caffbcb] Running
	I0110 02:49:47.905778  221603 system_pods.go:89] "kindnet-4h8vm" [f5820215-db87-4ed2-99a4-3970efeca785] Running
	I0110 02:49:47.905784  221603 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-403885" [708252d4-cdfc-4c44-bf86-993f95e9cbb6] Running
	I0110 02:49:47.905789  221603 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-403885" [a6e8ce72-0215-4cdc-b8a5-30863aba7c54] Running
	I0110 02:49:47.905797  221603 system_pods.go:89] "kube-proxy-ss9fs" [61ec02c4-f966-46de-bd2f-81de41f8f1bb] Running
	I0110 02:49:47.905802  221603 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-403885" [fde84223-af60-46ed-b21b-ccd83712651a] Running
	I0110 02:49:47.905814  221603 system_pods.go:89] "storage-provisioner" [83270b0f-e37c-4f07-a380-3ffb6386d492] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:49:48.270742  221603 system_pods.go:86] 8 kube-system pods found
	I0110 02:49:48.270779  221603 system_pods.go:89] "coredns-7d764666f9-sck2c" [8791efbb-04ac-4811-983a-9ffaf7bb15be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:49:48.270787  221603 system_pods.go:89] "etcd-default-k8s-diff-port-403885" [bdc3bfff-d5db-47c9-8f0f-81256caffbcb] Running
	I0110 02:49:48.270793  221603 system_pods.go:89] "kindnet-4h8vm" [f5820215-db87-4ed2-99a4-3970efeca785] Running
	I0110 02:49:48.270803  221603 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-403885" [708252d4-cdfc-4c44-bf86-993f95e9cbb6] Running
	I0110 02:49:48.270808  221603 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-403885" [a6e8ce72-0215-4cdc-b8a5-30863aba7c54] Running
	I0110 02:49:48.270816  221603 system_pods.go:89] "kube-proxy-ss9fs" [61ec02c4-f966-46de-bd2f-81de41f8f1bb] Running
	I0110 02:49:48.270820  221603 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-403885" [fde84223-af60-46ed-b21b-ccd83712651a] Running
	I0110 02:49:48.270827  221603 system_pods.go:89] "storage-provisioner" [83270b0f-e37c-4f07-a380-3ffb6386d492] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:49:48.741393  221603 system_pods.go:86] 8 kube-system pods found
	I0110 02:49:48.741424  221603 system_pods.go:89] "coredns-7d764666f9-sck2c" [8791efbb-04ac-4811-983a-9ffaf7bb15be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:49:48.741431  221603 system_pods.go:89] "etcd-default-k8s-diff-port-403885" [bdc3bfff-d5db-47c9-8f0f-81256caffbcb] Running
	I0110 02:49:48.741438  221603 system_pods.go:89] "kindnet-4h8vm" [f5820215-db87-4ed2-99a4-3970efeca785] Running
	I0110 02:49:48.741443  221603 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-403885" [708252d4-cdfc-4c44-bf86-993f95e9cbb6] Running
	I0110 02:49:48.741448  221603 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-403885" [a6e8ce72-0215-4cdc-b8a5-30863aba7c54] Running
	I0110 02:49:48.741452  221603 system_pods.go:89] "kube-proxy-ss9fs" [61ec02c4-f966-46de-bd2f-81de41f8f1bb] Running
	I0110 02:49:48.741457  221603 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-403885" [fde84223-af60-46ed-b21b-ccd83712651a] Running
	I0110 02:49:48.741464  221603 system_pods.go:89] "storage-provisioner" [83270b0f-e37c-4f07-a380-3ffb6386d492] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:49:49.198944  221603 system_pods.go:86] 8 kube-system pods found
	I0110 02:49:49.198973  221603 system_pods.go:89] "coredns-7d764666f9-sck2c" [8791efbb-04ac-4811-983a-9ffaf7bb15be] Running
	I0110 02:49:49.198980  221603 system_pods.go:89] "etcd-default-k8s-diff-port-403885" [bdc3bfff-d5db-47c9-8f0f-81256caffbcb] Running
	I0110 02:49:49.198987  221603 system_pods.go:89] "kindnet-4h8vm" [f5820215-db87-4ed2-99a4-3970efeca785] Running
	I0110 02:49:49.198992  221603 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-403885" [708252d4-cdfc-4c44-bf86-993f95e9cbb6] Running
	I0110 02:49:49.198997  221603 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-403885" [a6e8ce72-0215-4cdc-b8a5-30863aba7c54] Running
	I0110 02:49:49.199001  221603 system_pods.go:89] "kube-proxy-ss9fs" [61ec02c4-f966-46de-bd2f-81de41f8f1bb] Running
	I0110 02:49:49.199006  221603 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-403885" [fde84223-af60-46ed-b21b-ccd83712651a] Running
	I0110 02:49:49.199010  221603 system_pods.go:89] "storage-provisioner" [83270b0f-e37c-4f07-a380-3ffb6386d492] Running
	I0110 02:49:49.199017  221603 system_pods.go:126] duration metric: took 1.493406493s to wait for k8s-apps to be running ...
	I0110 02:49:49.199025  221603 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:49:49.199080  221603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:49:49.212882  221603 system_svc.go:56] duration metric: took 13.848142ms WaitForService to wait for kubelet
	I0110 02:49:49.212918  221603 kubeadm.go:587] duration metric: took 15.194907969s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:49:49.212963  221603 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:49:49.215938  221603 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 02:49:49.215966  221603 node_conditions.go:123] node cpu capacity is 2
	I0110 02:49:49.215982  221603 node_conditions.go:105] duration metric: took 3.011497ms to run NodePressure ...
	I0110 02:49:49.215995  221603 start.go:242] waiting for startup goroutines ...
	I0110 02:49:49.216002  221603 start.go:247] waiting for cluster config update ...
	I0110 02:49:49.216013  221603 start.go:256] writing updated cluster config ...
	I0110 02:49:49.216294  221603 ssh_runner.go:195] Run: rm -f paused
	I0110 02:49:49.222307  221603 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:49:49.226769  221603 pod_ready.go:83] waiting for pod "coredns-7d764666f9-sck2c" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:49.232583  221603 pod_ready.go:94] pod "coredns-7d764666f9-sck2c" is "Ready"
	I0110 02:49:49.232605  221603 pod_ready.go:86] duration metric: took 5.817167ms for pod "coredns-7d764666f9-sck2c" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:49.234917  221603 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:49.242710  221603 pod_ready.go:94] pod "etcd-default-k8s-diff-port-403885" is "Ready"
	I0110 02:49:49.242739  221603 pod_ready.go:86] duration metric: took 7.79565ms for pod "etcd-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:49.244995  221603 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:49.249365  221603 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-403885" is "Ready"
	I0110 02:49:49.249428  221603 pod_ready.go:86] duration metric: took 4.364446ms for pod "kube-apiserver-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:49.252107  221603 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:49.627051  221603 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-403885" is "Ready"
	I0110 02:49:49.627079  221603 pod_ready.go:86] duration metric: took 374.905825ms for pod "kube-controller-manager-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:49.826488  221603 pod_ready.go:83] waiting for pod "kube-proxy-ss9fs" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:50.226978  221603 pod_ready.go:94] pod "kube-proxy-ss9fs" is "Ready"
	I0110 02:49:50.227005  221603 pod_ready.go:86] duration metric: took 400.492397ms for pod "kube-proxy-ss9fs" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:50.426403  221603 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:50.826452  221603 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-403885" is "Ready"
	I0110 02:49:50.826483  221603 pod_ready.go:86] duration metric: took 400.056117ms for pod "kube-scheduler-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:50.826497  221603 pod_ready.go:40] duration metric: took 1.604160907s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:49:50.926905  221603 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 02:49:50.930644  221603 out.go:203] 
	W0110 02:49:50.934403  221603 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 02:49:50.937498  221603 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:49:50.941402  221603 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-403885" cluster and "default" namespace by default
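The kubectl skew warning a few lines up comes with its own remedy; as a quick usage sketch (binary path and profile name as in this run), the kubectl bundled with minikube can be invoked through it so client and control plane versions match, with everything after the -- passed through unchanged:

	out/minikube-linux-arm64 -p default-k8s-diff-port-403885 kubectl -- get pods -A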
	I0110 02:49:49.315667  227721 main.go:144] libmachine: Using SSH client type: native
	I0110 02:49:49.316004  227721 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0110 02:49:49.316019  227721 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:49:49.649599  227721 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:49:49.649624  227721 machine.go:97] duration metric: took 4.776867502s to provisionDockerMachine
	I0110 02:49:49.649636  227721 start.go:293] postStartSetup for "newest-cni-733680" (driver="docker")
	I0110 02:49:49.649646  227721 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:49:49.649726  227721 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:49:49.649768  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:49.669276  227721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:49.771913  227721 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:49:49.775059  227721 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:49:49.775090  227721 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:49:49.775102  227721 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/addons for local assets ...
	I0110 02:49:49.775155  227721 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/files for local assets ...
	I0110 02:49:49.775244  227721 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> 41682.pem in /etc/ssl/certs
	I0110 02:49:49.775348  227721 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:49:49.782784  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:49:49.799509  227721 start.go:296] duration metric: took 149.859431ms for postStartSetup
	I0110 02:49:49.799594  227721 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:49:49.799650  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:49.816376  227721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:49.916476  227721 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:49:49.920910  227721 fix.go:56] duration metric: took 5.379023632s for fixHost
	I0110 02:49:49.920936  227721 start.go:83] releasing machines lock for "newest-cni-733680", held for 5.379074707s
	I0110 02:49:49.921003  227721 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-733680
	I0110 02:49:49.937103  227721 ssh_runner.go:195] Run: cat /version.json
	I0110 02:49:49.937151  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:49.937169  227721 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:49:49.937220  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:49.956389  227721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:49.969424  227721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:50.165223  227721 ssh_runner.go:195] Run: systemctl --version
	I0110 02:49:50.171905  227721 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:49:50.208815  227721 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:49:50.213295  227721 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:49:50.213420  227721 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:49:50.221364  227721 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 02:49:50.221389  227721 start.go:496] detecting cgroup driver to use...
	I0110 02:49:50.221420  227721 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 02:49:50.221465  227721 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:49:50.237057  227721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:49:50.250249  227721 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:49:50.250330  227721 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:49:50.266528  227721 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:49:50.280285  227721 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:49:50.396685  227721 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:49:50.521003  227721 docker.go:234] disabling docker service ...
	I0110 02:49:50.521126  227721 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:49:50.540688  227721 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:49:50.556375  227721 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:49:50.683641  227721 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:49:50.808653  227721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:49:50.822460  227721 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:49:50.846923  227721 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:49:50.846995  227721 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:50.856884  227721 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 02:49:50.856953  227721 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:50.866475  227721 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:50.875337  227721 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:50.883923  227721 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:49:50.892014  227721 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:50.901211  227721 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:50.909568  227721 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:50.919195  227721 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:49:50.926804  227721 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:49:50.934752  227721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:49:51.133099  227721 ssh_runner.go:195] Run: sudo systemctl restart crio
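The run of sed edits just above is how minikube points CRI-O at its pause image, cgroup driver and sysctl defaults before restarting it. If the same settings ever need to be applied to a node by hand, a condensed sketch using the paths and values logged here (run as root; this assumes the stock minikube /etc/crio/crio.conf.d/02-crio.conf layout, so check the local files first):

	# point crictl at the CRI-O socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' > /etc/crictl.yaml
	# pause image and cgroup handling, as in the log above
	sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# let pods bind low ports without extra privileges
	grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || \
	  sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf
	sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	systemctl daemon-reload && systemctl restart crio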
	I0110 02:49:51.350591  227721 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:49:51.350658  227721 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:49:51.354535  227721 start.go:574] Will wait 60s for crictl version
	I0110 02:49:51.354589  227721 ssh_runner.go:195] Run: which crictl
	I0110 02:49:51.358129  227721 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:49:51.386452  227721 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:49:51.386534  227721 ssh_runner.go:195] Run: crio --version
	I0110 02:49:51.413414  227721 ssh_runner.go:195] Run: crio --version
	I0110 02:49:51.446245  227721 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:49:51.449219  227721 cli_runner.go:164] Run: docker network inspect newest-cni-733680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:49:51.467434  227721 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 02:49:51.471068  227721 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:49:51.484080  227721 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0110 02:49:51.487038  227721 kubeadm.go:884] updating cluster {Name:newest-cni-733680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-733680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:49:51.487201  227721 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:49:51.487271  227721 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:49:51.530727  227721 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:49:51.530753  227721 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:49:51.530808  227721 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:49:51.559661  227721 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:49:51.559684  227721 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:49:51.559692  227721 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 02:49:51.559859  227721 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-733680 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-733680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:49:51.559960  227721 ssh_runner.go:195] Run: crio config
	I0110 02:49:51.649920  227721 cni.go:84] Creating CNI manager for ""
	I0110 02:49:51.649944  227721 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:49:51.649967  227721 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I0110 02:49:51.649991  227721 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-733680 NodeName:newest-cni-733680 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:49:51.650121  227721 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-733680"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
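The generated manifest above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. For a manual sanity check of a config like this, kubeadm v1.26 and newer ship a validator; a sketch, assuming the path used in this run:

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new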
	I0110 02:49:51.650212  227721 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:49:51.657576  227721 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:49:51.657644  227721 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:49:51.664688  227721 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0110 02:49:51.677725  227721 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:49:51.691704  227721 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I0110 02:49:51.708568  227721 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:49:51.712042  227721 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:49:51.721970  227721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:49:51.852837  227721 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:49:51.873289  227721 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680 for IP: 192.168.76.2
	I0110 02:49:51.873311  227721 certs.go:195] generating shared ca certs ...
	I0110 02:49:51.873327  227721 certs.go:227] acquiring lock for ca certs: {Name:mk60bb8578732e3e8efcf11c7d77fb48585828c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:51.873522  227721 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key
	I0110 02:49:51.873596  227721 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key
	I0110 02:49:51.873608  227721 certs.go:257] generating profile certs ...
	I0110 02:49:51.873727  227721 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/client.key
	I0110 02:49:51.873817  227721 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/apiserver.key.aabe30f3
	I0110 02:49:51.873884  227721 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/proxy-client.key
	I0110 02:49:51.874016  227721 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem (1338 bytes)
	W0110 02:49:51.874066  227721 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168_empty.pem, impossibly tiny 0 bytes
	I0110 02:49:51.874083  227721 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:49:51.874130  227721 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:49:51.874180  227721 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:49:51.874225  227721 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem (1679 bytes)
	I0110 02:49:51.874306  227721 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:49:51.874941  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:49:51.892755  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:49:51.909930  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:49:51.927077  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 02:49:51.944783  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0110 02:49:51.962347  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 02:49:51.985129  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:49:52.012727  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 02:49:52.036283  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:49:52.069839  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I0110 02:49:52.094664  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I0110 02:49:52.127365  227721 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:49:52.145181  227721 ssh_runner.go:195] Run: openssl version
	I0110 02:49:52.151902  227721 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:49:52.160506  227721 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:49:52.168400  227721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:49:52.172018  227721 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:49:52.172119  227721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:49:52.214314  227721 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:49:52.222022  227721 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I0110 02:49:52.229380  227721 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I0110 02:49:52.236864  227721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I0110 02:49:52.240516  227721 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:58 /usr/share/ca-certificates/4168.pem
	I0110 02:49:52.240627  227721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I0110 02:49:52.281571  227721 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:49:52.289095  227721 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I0110 02:49:52.296245  227721 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I0110 02:49:52.303460  227721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I0110 02:49:52.306912  227721 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:58 /usr/share/ca-certificates/41682.pem
	I0110 02:49:52.307011  227721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I0110 02:49:52.349325  227721 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
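The pattern for each CA above is the same: the PEM is placed under /usr/share/ca-certificates, symlinked into /etc/ssl/certs, and then the OpenSSL subject-hash link (b5213941.0, 51391683.0, 3ec20f2e.0) is verified, since that hash-named link is what OpenSSL-based tools use to find a trusted CA. The hash in the link name is exactly what the openssl x509 -hash call prints; a sketch of the same check for minikubeCA:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${h}.0"   # should be a symlink resolving to minikubeCA.pem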
	I0110 02:49:52.356877  227721 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:49:52.360480  227721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 02:49:52.401232  227721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 02:49:52.442515  227721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 02:49:52.483877  227721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 02:49:52.525987  227721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 02:49:52.571183  227721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
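Each -checkend 86400 probe above asks whether the certificate will still be valid 86400 seconds (24 hours) from now: openssl exits 0 if so and 1 if the cert is expired or about to expire, presumably so minikube can decide whether regeneration is needed. A standalone example of the same probe:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "still valid for at least 24h" \
	  || echo "expired or expiring within 24h"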
	I0110 02:49:52.629363  227721 kubeadm.go:401] StartCluster: {Name:newest-cni-733680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-733680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:49:52.629502  227721 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:49:52.629593  227721 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:49:52.707386  227721 cri.go:96] found id: "ea5078ffbb2ab2a6bbaa2c34fad84cb3a86c77b7c22d703c398c8f81af573168"
	I0110 02:49:52.707455  227721 cri.go:96] found id: "564644b28306f6bcee019ff3074f2c75514833f9bf107fbb9bc76939c4b892b7"
	I0110 02:49:52.707485  227721 cri.go:96] found id: "9ede343cb2e278e170c60426ec5533a577c1bde2bb263e32f4b5e061784803c7"
	I0110 02:49:52.707503  227721 cri.go:96] found id: "b2affd0966bb9dbc433519da5f2d62205ba805d51c020dd49252bd59427932dc"
	I0110 02:49:52.707529  227721 cri.go:96] found id: ""
	I0110 02:49:52.707627  227721 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 02:49:52.732021  227721 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:49:52Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:49:52.732136  227721 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:49:52.742812  227721 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 02:49:52.742890  227721 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 02:49:52.742967  227721 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 02:49:52.755410  227721 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 02:49:52.756033  227721 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-733680" does not appear in /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:49:52.756367  227721 kubeconfig.go:62] /home/jenkins/minikube-integration/22414-2353/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-733680" cluster setting kubeconfig missing "newest-cni-733680" context setting]
	I0110 02:49:52.756808  227721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:52.758406  227721 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 02:49:52.770028  227721 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0110 02:49:52.770062  227721 kubeadm.go:602] duration metric: took 27.152503ms to restartPrimaryControlPlane
	I0110 02:49:52.770090  227721 kubeadm.go:403] duration metric: took 140.747566ms to StartCluster
	I0110 02:49:52.770112  227721 settings.go:142] acquiring lock: {Name:mkf19ab3097ad600bdb33c5b47b20062ddaabac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:52.770199  227721 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:49:52.771113  227721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:52.771349  227721 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:49:52.771738  227721 config.go:182] Loaded profile config "newest-cni-733680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:49:52.771722  227721 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:49:52.771844  227721 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-733680"
	I0110 02:49:52.771860  227721 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-733680"
	W0110 02:49:52.771866  227721 addons.go:248] addon storage-provisioner should already be in state true
	I0110 02:49:52.771875  227721 addons.go:70] Setting dashboard=true in profile "newest-cni-733680"
	I0110 02:49:52.771890  227721 host.go:66] Checking if "newest-cni-733680" exists ...
	I0110 02:49:52.771898  227721 addons.go:70] Setting default-storageclass=true in profile "newest-cni-733680"
	I0110 02:49:52.771909  227721 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-733680"
	I0110 02:49:52.772185  227721 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:52.772324  227721 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:52.771890  227721 addons.go:239] Setting addon dashboard=true in "newest-cni-733680"
	W0110 02:49:52.772704  227721 addons.go:248] addon dashboard should already be in state true
	I0110 02:49:52.772782  227721 host.go:66] Checking if "newest-cni-733680" exists ...
	I0110 02:49:52.773237  227721 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:52.777211  227721 out.go:179] * Verifying Kubernetes components...
	I0110 02:49:52.780387  227721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:49:52.826239  227721 addons.go:239] Setting addon default-storageclass=true in "newest-cni-733680"
	W0110 02:49:52.826259  227721 addons.go:248] addon default-storageclass should already be in state true
	I0110 02:49:52.826283  227721 host.go:66] Checking if "newest-cni-733680" exists ...
	I0110 02:49:52.826683  227721 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:52.834786  227721 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:49:52.838077  227721 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:49:52.838103  227721 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:49:52.838172  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:52.850062  227721 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 02:49:52.853327  227721 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 02:49:52.858599  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 02:49:52.858622  227721 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 02:49:52.858689  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:52.875042  227721 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:49:52.875065  227721 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:49:52.875142  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:52.895967  227721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:52.918908  227721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:52.923631  227721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
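Each docker container inspect ... HostPort call above resolves the host port that Docker mapped to the guest's SSH port 22; the resulting SSH clients all target 127.0.0.1:33088. The same mapping can be read with the plainer docker port form (a sketch):

	docker port newest-cni-733680 22/tcp
	# e.g. 0.0.0.0:33088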
	I0110 02:49:53.067602  227721 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:49:53.205587  227721 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:49:53.237603  227721 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:49:53.240836  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 02:49:53.240858  227721 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 02:49:53.295176  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 02:49:53.295248  227721 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 02:49:53.350797  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 02:49:53.350872  227721 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 02:49:53.411170  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 02:49:53.411250  227721 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 02:49:53.461048  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 02:49:53.461124  227721 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 02:49:53.492736  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 02:49:53.492809  227721 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 02:49:53.513750  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 02:49:53.513845  227721 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 02:49:53.556492  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 02:49:53.556562  227721 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 02:49:53.581438  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:49:53.581506  227721 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 02:49:53.613630  227721 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:49:57.046329  227721 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.978687072s)
	I0110 02:49:57.046422  227721 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.84080946s)
	I0110 02:49:57.046637  227721 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:49:57.046449  227721 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.808826332s)
	I0110 02:49:57.046542  227721 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.432840606s)
	I0110 02:49:57.047187  227721 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:49:57.050038  227721 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-733680 addons enable metrics-server
	
	I0110 02:49:57.071486  227721 api_server.go:72] duration metric: took 4.300106932s to wait for apiserver process to appear ...
	I0110 02:49:57.071509  227721 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:49:57.071527  227721 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:49:57.080089  227721 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 02:49:57.080120  227721 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
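The 500 above is the aggregated /healthz result: every sub-check passes except poststarthook/rbac/bootstrap-roles, which clears once the bootstrap RBAC objects are recreated after the apiserver restart, and the next poll below returns 200. The same per-check breakdown can be requested by hand; a sketch, assuming anonymous access to /healthz is allowed (the default bootstrap RBAC grants it) and using -k because the host does not trust the cluster CA:

	curl -k "https://192.168.76.2:8443/healthz?verbose"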
	I0110 02:49:57.093756  227721 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0110 02:49:57.096511  227721 addons.go:530] duration metric: took 4.324781462s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0110 02:49:57.572427  227721 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:49:57.586847  227721 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 02:49:57.587967  227721 api_server.go:141] control plane version: v1.35.0
	I0110 02:49:57.588028  227721 api_server.go:131] duration metric: took 516.494969ms to wait for apiserver health ...
	I0110 02:49:57.588047  227721 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:49:57.592332  227721 system_pods.go:59] 8 kube-system pods found
	I0110 02:49:57.592387  227721 system_pods.go:61] "coredns-7d764666f9-7djps" [5b40a1e2-6d92-4e33-8a94-19e45bc18937] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 02:49:57.592406  227721 system_pods.go:61] "etcd-newest-cni-733680" [15903ffb-9c75-402b-aaf1-2ea433e993a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:49:57.592415  227721 system_pods.go:61] "kindnet-bnwfz" [49fa87e1-f1d9-4315-918f-b079caade618] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 02:49:57.592429  227721 system_pods.go:61] "kube-apiserver-newest-cni-733680" [7daa7cb6-3a2f-4d11-aadd-c7aab970ff4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:49:57.592437  227721 system_pods.go:61] "kube-controller-manager-newest-cni-733680" [f5a37a0a-47e7-41e5-a01f-16efb8c43166] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:49:57.592476  227721 system_pods.go:61] "kube-proxy-mnr64" [0f5cc6b8-6364-4711-a838-fb70b057c4ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 02:49:57.592490  227721 system_pods.go:61] "kube-scheduler-newest-cni-733680" [d5f0e1ba-9a06-4172-a5ec-140a553a47ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:49:57.592502  227721 system_pods.go:61] "storage-provisioner" [1a9287f6-2918-4098-8444-2f1c2c4dda71] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 02:49:57.592514  227721 system_pods.go:74] duration metric: took 4.460764ms to wait for pod list to return data ...
	I0110 02:49:57.592543  227721 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:49:57.594834  227721 default_sa.go:45] found service account: "default"
	I0110 02:49:57.594857  227721 default_sa.go:55] duration metric: took 2.300878ms for default service account to be created ...
	I0110 02:49:57.594870  227721 kubeadm.go:587] duration metric: took 4.823494248s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 02:49:57.594915  227721 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:49:57.597527  227721 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 02:49:57.597564  227721 node_conditions.go:123] node cpu capacity is 2
	I0110 02:49:57.597577  227721 node_conditions.go:105] duration metric: took 2.655954ms to run NodePressure ...
	I0110 02:49:57.597610  227721 start.go:242] waiting for startup goroutines ...
	I0110 02:49:57.597619  227721 start.go:247] waiting for cluster config update ...
	I0110 02:49:57.597664  227721 start.go:256] writing updated cluster config ...
	I0110 02:49:57.597984  227721 ssh_runner.go:195] Run: rm -f paused
	I0110 02:49:57.679374  227721 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 02:49:57.682549  227721 out.go:203] 
	W0110 02:49:57.685626  227721 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 02:49:57.690644  227721 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:49:57.693501  227721 out.go:179] * Done! kubectl is now configured to use "newest-cni-733680" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.332502905Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.341422332Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=fd1f185d-34fd-4fd6-ad98-ed5fb7ae718a name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.341770097Z" level=info msg="Running pod sandbox: kube-system/kindnet-bnwfz/POD" id=803b57a9-2332-4933-ad48-7d3fb61240aa name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.341842013Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.345130369Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=803b57a9-2332-4933-ad48-7d3fb61240aa name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.348856597Z" level=info msg="Ran pod sandbox 4e5a829f07b47389342609b465c7f583b2849a5bf78f39ea7a091a61a15c1752 with infra container: kube-system/kindnet-bnwfz/POD" id=803b57a9-2332-4933-ad48-7d3fb61240aa name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.353020253Z" level=info msg="Ran pod sandbox 9502b267985b589357f22c19d88e434650d9c3289a00a07ea2b1f8dc5a5e8e6f with infra container: kube-system/kube-proxy-mnr64/POD" id=fd1f185d-34fd-4fd6-ad98-ed5fb7ae718a name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.35615082Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=bec97807-8c02-4b7f-9073-f8aeff2fc48f name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.356422033Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=273f83ee-ec4a-4747-84a9-9d09fe3bcd07 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.358783522Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=aafba74b-c63d-42a0-8291-75fbe9eeefcc name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.358931571Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=b0849b49-f4f0-426c-ba68-0924473e9aad name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.360167772Z" level=info msg="Creating container: kube-system/kube-proxy-mnr64/kube-proxy" id=e1d3fb54-374c-4544-86f5-03a639cb5c37 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.360341388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.360191148Z" level=info msg="Creating container: kube-system/kindnet-bnwfz/kindnet-cni" id=4a17c2df-916a-4634-aeda-f9032ddda821 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.360585246Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.365687199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.366183391Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.36824412Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.368879303Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.385414767Z" level=info msg="Created container f3115642cb9635a0a0fb487d418fee931ed516febc3248ca4c87c80d330b4a87: kube-system/kindnet-bnwfz/kindnet-cni" id=4a17c2df-916a-4634-aeda-f9032ddda821 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.388410454Z" level=info msg="Created container d50472bf13056b006d7be9a47a3b5f1710e956b40dc97490122f62498e3e741b: kube-system/kube-proxy-mnr64/kube-proxy" id=e1d3fb54-374c-4544-86f5-03a639cb5c37 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.389520021Z" level=info msg="Starting container: f3115642cb9635a0a0fb487d418fee931ed516febc3248ca4c87c80d330b4a87" id=a0b58415-9b71-44cf-ab92-ace098aef13f name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.389760646Z" level=info msg="Starting container: d50472bf13056b006d7be9a47a3b5f1710e956b40dc97490122f62498e3e741b" id=a3be81b8-c114-4e1f-9eb2-980ddda21736 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.391417669Z" level=info msg="Started container" PID=1069 containerID=f3115642cb9635a0a0fb487d418fee931ed516febc3248ca4c87c80d330b4a87 description=kube-system/kindnet-bnwfz/kindnet-cni id=a0b58415-9b71-44cf-ab92-ace098aef13f name=/runtime.v1.RuntimeService/StartContainer sandboxID=4e5a829f07b47389342609b465c7f583b2849a5bf78f39ea7a091a61a15c1752
	Jan 10 02:49:57 newest-cni-733680 crio[615]: time="2026-01-10T02:49:57.395209371Z" level=info msg="Started container" PID=1073 containerID=d50472bf13056b006d7be9a47a3b5f1710e956b40dc97490122f62498e3e741b description=kube-system/kube-proxy-mnr64/kube-proxy id=a3be81b8-c114-4e1f-9eb2-980ddda21736 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9502b267985b589357f22c19d88e434650d9c3289a00a07ea2b1f8dc5a5e8e6f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d50472bf13056       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5   7 seconds ago       Running             kube-proxy                1                   9502b267985b5       kube-proxy-mnr64                            kube-system
	f3115642cb963       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13   7 seconds ago       Running             kindnet-cni               1                   4e5a829f07b47       kindnet-bnwfz                               kube-system
	ea5078ffbb2ab       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0   11 seconds ago      Running             kube-controller-manager   1                   78b4c2ed69206       kube-controller-manager-newest-cni-733680   kube-system
	564644b28306f       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57   11 seconds ago      Running             etcd                      1                   ad62c517f63a6       etcd-newest-cni-733680                      kube-system
	9ede343cb2e27       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f   11 seconds ago      Running             kube-scheduler            1                   78db093b1df3c       kube-scheduler-newest-cni-733680            kube-system
	b2affd0966bb9       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856   12 seconds ago      Running             kube-apiserver            1                   a35b014e338ff       kube-apiserver-newest-cni-733680            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-733680
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-733680
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=newest-cni-733680
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_49_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:49:30 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-733680
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:49:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:49:56 +0000   Sat, 10 Jan 2026 02:49:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:49:56 +0000   Sat, 10 Jan 2026 02:49:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:49:56 +0000   Sat, 10 Jan 2026 02:49:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 10 Jan 2026 02:49:56 +0000   Sat, 10 Jan 2026 02:49:26 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-733680
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                c296da09-1974-40e8-a0f3-9e9b1313e8dc
	  Boot ID:                    41f52236-76b9-4dc8-bfe3-121fbdeb9659
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-733680                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-bnwfz                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-newest-cni-733680             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-733680    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-mnr64                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-newest-cni-733680             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  28s   node-controller  Node newest-cni-733680 event: Registered Node newest-cni-733680 in Controller
	  Normal  RegisteredNode  5s    node-controller  Node newest-cni-733680 event: Registered Node newest-cni-733680 in Controller
	
	
	==> dmesg <==
	[Jan10 02:17] overlayfs: idmapped layers are currently not supported
	[Jan10 02:18] overlayfs: idmapped layers are currently not supported
	[ +23.212409] overlayfs: idmapped layers are currently not supported
	[  +9.866169] overlayfs: idmapped layers are currently not supported
	[Jan10 02:19] overlayfs: idmapped layers are currently not supported
	[Jan10 02:20] overlayfs: idmapped layers are currently not supported
	[ +35.960240] overlayfs: idmapped layers are currently not supported
	[Jan10 02:21] overlayfs: idmapped layers are currently not supported
	[Jan10 02:23] overlayfs: idmapped layers are currently not supported
	[Jan10 02:24] overlayfs: idmapped layers are currently not supported
	[Jan10 02:25] overlayfs: idmapped layers are currently not supported
	[Jan10 02:30] overlayfs: idmapped layers are currently not supported
	[Jan10 02:31] overlayfs: idmapped layers are currently not supported
	[Jan10 02:35] overlayfs: idmapped layers are currently not supported
	[Jan10 02:37] overlayfs: idmapped layers are currently not supported
	[Jan10 02:41] overlayfs: idmapped layers are currently not supported
	[ +37.534021] overlayfs: idmapped layers are currently not supported
	[Jan10 02:43] overlayfs: idmapped layers are currently not supported
	[Jan10 02:44] overlayfs: idmapped layers are currently not supported
	[Jan10 02:45] overlayfs: idmapped layers are currently not supported
	[Jan10 02:46] overlayfs: idmapped layers are currently not supported
	[Jan10 02:48] overlayfs: idmapped layers are currently not supported
	[Jan10 02:49] overlayfs: idmapped layers are currently not supported
	[  +4.690964] overlayfs: idmapped layers are currently not supported
	[ +26.361261] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [564644b28306f6bcee019ff3074f2c75514833f9bf107fbb9bc76939c4b892b7] <==
	{"level":"info","ts":"2026-01-10T02:49:53.142290Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:49:53.142384Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T02:49:53.147912Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:49:53.147949Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T02:49:53.148965Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2026-01-10T02:49:53.149043Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T02:49:53.149117Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T02:49:53.551836Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T02:49:53.551967Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:49:53.552048Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T02:49:53.552089Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:49:53.552132Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T02:49:53.556690Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T02:49:53.556781Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:49:53.556827Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T02:49:53.556864Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T02:49:53.559546Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:newest-cni-733680 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:49:53.562361Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:49:53.563358Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:49:53.581904Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:49:53.582061Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:49:53.582074Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:49:53.582721Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:49:53.646788Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T02:49:53.656609Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 02:50:04 up  1:32,  0 user,  load average: 3.94, 2.73, 2.16
	Linux newest-cni-733680 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f3115642cb9635a0a0fb487d418fee931ed516febc3248ca4c87c80d330b4a87] <==
	I0110 02:49:57.525787       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:49:57.526315       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 02:49:57.526458       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:49:57.526481       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:49:57.526492       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:49:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:49:57.725282       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:49:57.725303       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:49:57.725318       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:49:57.726454       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [b2affd0966bb9dbc433519da5f2d62205ba805d51c020dd49252bd59427932dc] <==
	I0110 02:49:56.047699       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0110 02:49:56.048501       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 02:49:56.054353       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:56.054381       1 policy_source.go:248] refreshing policies
	I0110 02:49:56.057701       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:49:56.057735       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:56.057785       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 02:49:56.058371       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 02:49:56.100310       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 02:49:56.116677       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 02:49:56.146696       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:56.197136       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E0110 02:49:56.202146       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 02:49:56.648975       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:49:56.702086       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:49:56.707092       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:49:56.748366       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:49:56.765966       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:49:56.779979       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:49:56.855430       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.222.248"}
	I0110 02:49:56.893053       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.185.9"}
	I0110 02:49:59.321962       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:49:59.521900       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:49:59.572260       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 02:49:59.722185       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [ea5078ffbb2ab2a6bbaa2c34fad84cb3a86c77b7c22d703c398c8f81af573168] <==
	I0110 02:49:59.001281       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.001329       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.001426       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.001549       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.001566       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.001606       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.001639       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.007543       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.007639       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.007696       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.007717       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.007956       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.008053       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 02:49:59.008145       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-733680"
	I0110 02:49:59.008202       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0110 02:49:59.008224       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.008264       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.008310       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.008554       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.019101       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.036858       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.083392       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.089781       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:59.089917       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:49:59.089949       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [d50472bf13056b006d7be9a47a3b5f1710e956b40dc97490122f62498e3e741b] <==
	I0110 02:49:57.441356       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:49:57.527020       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:49:57.628298       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:57.628414       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 02:49:57.628540       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:49:57.652737       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:49:57.652798       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:49:57.656470       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:49:57.656873       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:49:57.656893       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:49:57.661779       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:49:57.661849       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:49:57.662162       1 config.go:200] "Starting service config controller"
	I0110 02:49:57.662222       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:49:57.662566       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:49:57.662607       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:49:57.663046       1 config.go:309] "Starting node config controller"
	I0110 02:49:57.663090       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:49:57.663119       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:49:57.763929       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:49:57.763975       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 02:49:57.764013       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [9ede343cb2e278e170c60426ec5533a577c1bde2bb263e32f4b5e061784803c7] <==
	I0110 02:49:54.173375       1 serving.go:386] Generated self-signed cert in-memory
	W0110 02:49:55.909645       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 02:49:55.909752       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 02:49:55.909787       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 02:49:55.909828       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 02:49:56.064297       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 02:49:56.064333       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:49:56.066695       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 02:49:56.066827       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 02:49:56.066837       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:49:56.066853       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 02:49:56.266907       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:49:56 newest-cni-733680 kubelet[736]: I0110 02:49:56.220360     736 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-733680"
	Jan 10 02:49:56 newest-cni-733680 kubelet[736]: I0110 02:49:56.220419     736 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Jan 10 02:49:56 newest-cni-733680 kubelet[736]: I0110 02:49:56.222169     736 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Jan 10 02:49:56 newest-cni-733680 kubelet[736]: E0110 02:49:56.249718     736 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-733680\" already exists" pod="kube-system/etcd-newest-cni-733680"
	Jan 10 02:49:56 newest-cni-733680 kubelet[736]: I0110 02:49:56.249752     736 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-733680"
	Jan 10 02:49:56 newest-cni-733680 kubelet[736]: E0110 02:49:56.273485     736 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-733680\" already exists" pod="kube-system/kube-apiserver-newest-cni-733680"
	Jan 10 02:49:56 newest-cni-733680 kubelet[736]: I0110 02:49:56.273528     736 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-733680"
	Jan 10 02:49:56 newest-cni-733680 kubelet[736]: E0110 02:49:56.298682     736 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-733680\" already exists" pod="kube-system/kube-controller-manager-newest-cni-733680"
	Jan 10 02:49:56 newest-cni-733680 kubelet[736]: I0110 02:49:56.298727     736 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-733680"
	Jan 10 02:49:56 newest-cni-733680 kubelet[736]: E0110 02:49:56.312314     736 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-733680\" already exists" pod="kube-system/kube-scheduler-newest-cni-733680"
	Jan 10 02:49:57 newest-cni-733680 kubelet[736]: I0110 02:49:57.002345     736 apiserver.go:52] "Watching apiserver"
	Jan 10 02:49:57 newest-cni-733680 kubelet[736]: E0110 02:49:57.026013     736 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-733680" containerName="etcd"
	Jan 10 02:49:57 newest-cni-733680 kubelet[736]: E0110 02:49:57.026526     736 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-733680" containerName="kube-scheduler"
	Jan 10 02:49:57 newest-cni-733680 kubelet[736]: E0110 02:49:57.026928     736 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-733680" containerName="kube-apiserver"
	Jan 10 02:49:57 newest-cni-733680 kubelet[736]: E0110 02:49:57.027330     736 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-733680" containerName="kube-controller-manager"
	Jan 10 02:49:57 newest-cni-733680 kubelet[736]: I0110 02:49:57.112096     736 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jan 10 02:49:57 newest-cni-733680 kubelet[736]: I0110 02:49:57.193186     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/49fa87e1-f1d9-4315-918f-b079caade618-cni-cfg\") pod \"kindnet-bnwfz\" (UID: \"49fa87e1-f1d9-4315-918f-b079caade618\") " pod="kube-system/kindnet-bnwfz"
	Jan 10 02:49:57 newest-cni-733680 kubelet[736]: I0110 02:49:57.193387     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49fa87e1-f1d9-4315-918f-b079caade618-xtables-lock\") pod \"kindnet-bnwfz\" (UID: \"49fa87e1-f1d9-4315-918f-b079caade618\") " pod="kube-system/kindnet-bnwfz"
	Jan 10 02:49:57 newest-cni-733680 kubelet[736]: I0110 02:49:57.193506     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f5cc6b8-6364-4711-a838-fb70b057c4ef-xtables-lock\") pod \"kube-proxy-mnr64\" (UID: \"0f5cc6b8-6364-4711-a838-fb70b057c4ef\") " pod="kube-system/kube-proxy-mnr64"
	Jan 10 02:49:57 newest-cni-733680 kubelet[736]: I0110 02:49:57.193599     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f5cc6b8-6364-4711-a838-fb70b057c4ef-lib-modules\") pod \"kube-proxy-mnr64\" (UID: \"0f5cc6b8-6364-4711-a838-fb70b057c4ef\") " pod="kube-system/kube-proxy-mnr64"
	Jan 10 02:49:57 newest-cni-733680 kubelet[736]: I0110 02:49:57.193703     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49fa87e1-f1d9-4315-918f-b079caade618-lib-modules\") pod \"kindnet-bnwfz\" (UID: \"49fa87e1-f1d9-4315-918f-b079caade618\") " pod="kube-system/kindnet-bnwfz"
	Jan 10 02:49:57 newest-cni-733680 kubelet[736]: I0110 02:49:57.204794     736 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Jan 10 02:49:58 newest-cni-733680 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 02:49:58 newest-cni-733680 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 02:49:58 newest-cni-733680 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-733680 -n newest-cni-733680
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-733680 -n newest-cni-733680: exit status 2 (468.35724ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-733680 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-7djps storage-provisioner dashboard-metrics-scraper-867fb5f87b-vxw7r kubernetes-dashboard-b84665fb8-kqjbx
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-733680 describe pod coredns-7d764666f9-7djps storage-provisioner dashboard-metrics-scraper-867fb5f87b-vxw7r kubernetes-dashboard-b84665fb8-kqjbx
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-733680 describe pod coredns-7d764666f9-7djps storage-provisioner dashboard-metrics-scraper-867fb5f87b-vxw7r kubernetes-dashboard-b84665fb8-kqjbx: exit status 1 (83.871315ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-7djps" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-vxw7r" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-kqjbx" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-733680 describe pod coredns-7d764666f9-7djps storage-provisioner dashboard-metrics-scraper-867fb5f87b-vxw7r kubernetes-dashboard-b84665fb8-kqjbx: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.31s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-403885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-403885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (342.344078ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:50:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-403885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-403885 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-403885 describe deploy/metrics-server -n kube-system: exit status 1 (180.170304ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-403885 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-403885
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-403885:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "68becb0d3e52aa8382b4740bd6ddcc35bdfee6a5501059560ca4b5b88dc1ba82",
	        "Created": "2026-01-10T02:49:07.169528975Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 222538,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:49:07.25869663Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/68becb0d3e52aa8382b4740bd6ddcc35bdfee6a5501059560ca4b5b88dc1ba82/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68becb0d3e52aa8382b4740bd6ddcc35bdfee6a5501059560ca4b5b88dc1ba82/hostname",
	        "HostsPath": "/var/lib/docker/containers/68becb0d3e52aa8382b4740bd6ddcc35bdfee6a5501059560ca4b5b88dc1ba82/hosts",
	        "LogPath": "/var/lib/docker/containers/68becb0d3e52aa8382b4740bd6ddcc35bdfee6a5501059560ca4b5b88dc1ba82/68becb0d3e52aa8382b4740bd6ddcc35bdfee6a5501059560ca4b5b88dc1ba82-json.log",
	        "Name": "/default-k8s-diff-port-403885",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-403885:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-403885",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "68becb0d3e52aa8382b4740bd6ddcc35bdfee6a5501059560ca4b5b88dc1ba82",
	                "LowerDir": "/var/lib/docker/overlay2/d01d645ede148d5aa173ad4f705d7224af0ded80634c399615f32176e47ffff2-init/diff:/var/lib/docker/overlay2/2b235871be32ebd3e55c0a490bf5b289824c6be94b4d2763f5bca3b814af3fd1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d01d645ede148d5aa173ad4f705d7224af0ded80634c399615f32176e47ffff2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d01d645ede148d5aa173ad4f705d7224af0ded80634c399615f32176e47ffff2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d01d645ede148d5aa173ad4f705d7224af0ded80634c399615f32176e47ffff2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-403885",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-403885/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-403885",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-403885",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-403885",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "99ab946e0384fc5ed719a5212da636a69a472090b7c507cc233c48ffb8b022f6",
	            "SandboxKey": "/var/run/docker/netns/99ab946e0384",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-403885": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:ec:c8:aa:a6:8d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "634ff5f67b9d8e0ced560118940fddc59ecaca247334cc034944724496472f4d",
	                    "EndpointID": "f7025e5fb32eb7e7e62d6e70443e0811ff6582ecbabe748c0d01ec9deda3dfa4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-403885",
	                        "68becb0d3e52"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-403885 -n default-k8s-diff-port-403885
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-403885 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-403885 logs -n 25: (1.513192797s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ embed-certs-290628 image list --format=json                                                                                                                                                                                                   │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ pause   │ -p embed-certs-290628 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │                     │
	│ delete  │ -p embed-certs-290628                                                                                                                                                                                                                         │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ delete  │ -p embed-certs-290628                                                                                                                                                                                                                         │ embed-certs-290628           │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ delete  │ -p disable-driver-mounts-990753                                                                                                                                                                                                               │ disable-driver-mounts-990753 │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:46 UTC │
	│ start   │ -p no-preload-676905 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:46 UTC │ 10 Jan 26 02:47 UTC │
	│ addons  │ enable metrics-server -p no-preload-676905 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │                     │
	│ stop    │ -p no-preload-676905 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │ 10 Jan 26 02:47 UTC │
	│ addons  │ enable dashboard -p no-preload-676905 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │ 10 Jan 26 02:47 UTC │
	│ start   │ -p no-preload-676905 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:47 UTC │ 10 Jan 26 02:48 UTC │
	│ image   │ no-preload-676905 image list --format=json                                                                                                                                                                                                    │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │ 10 Jan 26 02:48 UTC │
	│ pause   │ -p no-preload-676905 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │                     │
	│ ssh     │ force-systemd-flag-038359 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-038359    │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │ 10 Jan 26 02:48 UTC │
	│ delete  │ -p force-systemd-flag-038359                                                                                                                                                                                                                  │ force-systemd-flag-038359    │ jenkins │ v1.37.0 │ 10 Jan 26 02:48 UTC │ 10 Jan 26 02:49 UTC │
	│ start   │ -p default-k8s-diff-port-403885 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-403885 │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ delete  │ -p no-preload-676905                                                                                                                                                                                                                          │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ delete  │ -p no-preload-676905                                                                                                                                                                                                                          │ no-preload-676905            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ start   │ -p newest-cni-733680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-733680            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ addons  │ enable metrics-server -p newest-cni-733680 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-733680            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │                     │
	│ stop    │ -p newest-cni-733680 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-733680            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ addons  │ enable dashboard -p newest-cni-733680 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-733680            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ start   │ -p newest-cni-733680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-733680            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ image   │ newest-cni-733680 image list --format=json                                                                                                                                                                                                    │ newest-cni-733680            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ pause   │ -p newest-cni-733680 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-733680            │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-403885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-403885 │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:49:44
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:49:44.309497  227721 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:49:44.309694  227721 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:49:44.309721  227721 out.go:374] Setting ErrFile to fd 2...
	I0110 02:49:44.309742  227721 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:49:44.310463  227721 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:49:44.310909  227721 out.go:368] Setting JSON to false
	I0110 02:49:44.311837  227721 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5534,"bootTime":1768007851,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 02:49:44.311908  227721 start.go:143] virtualization:  
	I0110 02:49:44.314988  227721 out.go:179] * [newest-cni-733680] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:49:44.318901  227721 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:49:44.318981  227721 notify.go:221] Checking for updates...
	I0110 02:49:44.324985  227721 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:49:44.328078  227721 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:49:44.331185  227721 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 02:49:44.334208  227721 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:49:44.338487  227721 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:49:44.341862  227721 config.go:182] Loaded profile config "newest-cni-733680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:49:44.342521  227721 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:49:44.367410  227721 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:49:44.367524  227721 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:49:44.444412  227721 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:49:44.433211036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:49:44.444531  227721 docker.go:319] overlay module found
	I0110 02:49:44.447876  227721 out.go:179] * Using the docker driver based on existing profile
	I0110 02:49:44.451345  227721 start.go:309] selected driver: docker
	I0110 02:49:44.451366  227721 start.go:928] validating driver "docker" against &{Name:newest-cni-733680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-733680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:49:44.451475  227721 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:49:44.452206  227721 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:49:44.507205  227721 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:49:44.498529625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:49:44.507559  227721 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 02:49:44.507615  227721 cni.go:84] Creating CNI manager for ""
	I0110 02:49:44.507673  227721 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:49:44.507711  227721 start.go:353] cluster config:
	{Name:newest-cni-733680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-733680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:49:44.512606  227721 out.go:179] * Starting "newest-cni-733680" primary control-plane node in "newest-cni-733680" cluster
	I0110 02:49:44.515426  227721 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:49:44.518453  227721 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:49:44.521364  227721 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:49:44.521413  227721 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 02:49:44.521426  227721 cache.go:65] Caching tarball of preloaded images
	I0110 02:49:44.521435  227721 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:49:44.521506  227721 preload.go:251] Found /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 02:49:44.521516  227721 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:49:44.521629  227721 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/config.json ...
	I0110 02:49:44.541711  227721 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:49:44.541735  227721 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:49:44.541750  227721 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:49:44.541787  227721 start.go:360] acquireMachinesLock for newest-cni-733680: {Name:mkffafc06373cf7d630e08f2554eaef3a62ff5c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:49:44.541849  227721 start.go:364] duration metric: took 34.297µs to acquireMachinesLock for "newest-cni-733680"
	I0110 02:49:44.541873  227721 start.go:96] Skipping create...Using existing machine configuration
	I0110 02:49:44.541886  227721 fix.go:54] fixHost starting: 
	I0110 02:49:44.542210  227721 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:44.558798  227721 fix.go:112] recreateIfNeeded on newest-cni-733680: state=Stopped err=<nil>
	W0110 02:49:44.558830  227721 fix.go:138] unexpected machine state, will restart: <nil>
	W0110 02:49:44.679057  221603 node_ready.go:57] node "default-k8s-diff-port-403885" has "Ready":"False" status (will retry)
	W0110 02:49:47.175918  221603 node_ready.go:57] node "default-k8s-diff-port-403885" has "Ready":"False" status (will retry)
	I0110 02:49:44.562180  227721 out.go:252] * Restarting existing docker container for "newest-cni-733680" ...
	I0110 02:49:44.562266  227721 cli_runner.go:164] Run: docker start newest-cni-733680
	I0110 02:49:44.826055  227721 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:44.849274  227721 kic.go:430] container "newest-cni-733680" state is running.
	I0110 02:49:44.849680  227721 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-733680
	I0110 02:49:44.872507  227721 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/config.json ...
	I0110 02:49:44.872747  227721 machine.go:94] provisionDockerMachine start ...
	I0110 02:49:44.872811  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:44.892850  227721 main.go:144] libmachine: Using SSH client type: native
	I0110 02:49:44.893319  227721 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0110 02:49:44.893332  227721 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:49:44.893964  227721 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0110 02:49:48.067572  227721 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-733680
	
	I0110 02:49:48.067638  227721 ubuntu.go:182] provisioning hostname "newest-cni-733680"
	I0110 02:49:48.067727  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:48.098348  227721 main.go:144] libmachine: Using SSH client type: native
	I0110 02:49:48.098657  227721 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0110 02:49:48.098670  227721 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-733680 && echo "newest-cni-733680" | sudo tee /etc/hostname
	I0110 02:49:48.261145  227721 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-733680
	
	I0110 02:49:48.261217  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:48.281566  227721 main.go:144] libmachine: Using SSH client type: native
	I0110 02:49:48.281886  227721 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0110 02:49:48.281901  227721 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-733680' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-733680/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-733680' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:49:48.427922  227721 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:49:48.428009  227721 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2353/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2353/.minikube}
	I0110 02:49:48.428044  227721 ubuntu.go:190] setting up certificates
	I0110 02:49:48.428085  227721 provision.go:84] configureAuth start
	I0110 02:49:48.428173  227721 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-733680
	I0110 02:49:48.444728  227721 provision.go:143] copyHostCerts
	I0110 02:49:48.444802  227721 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem, removing ...
	I0110 02:49:48.444824  227721 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem
	I0110 02:49:48.444902  227721 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem (1082 bytes)
	I0110 02:49:48.445010  227721 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem, removing ...
	I0110 02:49:48.445020  227721 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem
	I0110 02:49:48.445048  227721 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem (1123 bytes)
	I0110 02:49:48.445111  227721 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem, removing ...
	I0110 02:49:48.445119  227721 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem
	I0110 02:49:48.445144  227721 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem (1679 bytes)
	I0110 02:49:48.445221  227721 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem org=jenkins.newest-cni-733680 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-733680]
	I0110 02:49:49.101821  227721 provision.go:177] copyRemoteCerts
	I0110 02:49:49.101923  227721 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:49:49.101988  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:49.119449  227721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:49.225157  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 02:49:49.258065  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:49:49.279139  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:49:49.297184  227721 provision.go:87] duration metric: took 869.064424ms to configureAuth
	I0110 02:49:49.297209  227721 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:49:49.297398  227721 config.go:182] Loaded profile config "newest-cni-733680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:49:49.297505  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:47.676475  221603 node_ready.go:49] node "default-k8s-diff-port-403885" is "Ready"
	I0110 02:49:47.676508  221603 node_ready.go:38] duration metric: took 12.003635661s for node "default-k8s-diff-port-403885" to be "Ready" ...
	I0110 02:49:47.676521  221603 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:49:47.676580  221603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:49:47.689566  221603 api_server.go:72] duration metric: took 13.671553329s to wait for apiserver process to appear ...
	I0110 02:49:47.689593  221603 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:49:47.689613  221603 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0110 02:49:47.698429  221603 api_server.go:325] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0110 02:49:47.699635  221603 api_server.go:141] control plane version: v1.35.0
	I0110 02:49:47.699659  221603 api_server.go:131] duration metric: took 10.058794ms to wait for apiserver health ...
	I0110 02:49:47.699668  221603 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:49:47.702731  221603 system_pods.go:59] 8 kube-system pods found
	I0110 02:49:47.702773  221603 system_pods.go:61] "coredns-7d764666f9-sck2c" [8791efbb-04ac-4811-983a-9ffaf7bb15be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:49:47.702819  221603 system_pods.go:61] "etcd-default-k8s-diff-port-403885" [bdc3bfff-d5db-47c9-8f0f-81256caffbcb] Running
	I0110 02:49:47.702828  221603 system_pods.go:61] "kindnet-4h8vm" [f5820215-db87-4ed2-99a4-3970efeca785] Running
	I0110 02:49:47.702839  221603 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-403885" [708252d4-cdfc-4c44-bf86-993f95e9cbb6] Running
	I0110 02:49:47.702845  221603 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-403885" [a6e8ce72-0215-4cdc-b8a5-30863aba7c54] Running
	I0110 02:49:47.702852  221603 system_pods.go:61] "kube-proxy-ss9fs" [61ec02c4-f966-46de-bd2f-81de41f8f1bb] Running
	I0110 02:49:47.702875  221603 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-403885" [fde84223-af60-46ed-b21b-ccd83712651a] Running
	I0110 02:49:47.702889  221603 system_pods.go:61] "storage-provisioner" [83270b0f-e37c-4f07-a380-3ffb6386d492] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:49:47.702904  221603 system_pods.go:74] duration metric: took 3.222266ms to wait for pod list to return data ...
	I0110 02:49:47.702917  221603 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:49:47.705574  221603 default_sa.go:45] found service account: "default"
	I0110 02:49:47.705597  221603 default_sa.go:55] duration metric: took 2.67266ms for default service account to be created ...
	I0110 02:49:47.705606  221603 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:49:47.708460  221603 system_pods.go:86] 8 kube-system pods found
	I0110 02:49:47.708495  221603 system_pods.go:89] "coredns-7d764666f9-sck2c" [8791efbb-04ac-4811-983a-9ffaf7bb15be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:49:47.708502  221603 system_pods.go:89] "etcd-default-k8s-diff-port-403885" [bdc3bfff-d5db-47c9-8f0f-81256caffbcb] Running
	I0110 02:49:47.708509  221603 system_pods.go:89] "kindnet-4h8vm" [f5820215-db87-4ed2-99a4-3970efeca785] Running
	I0110 02:49:47.708514  221603 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-403885" [708252d4-cdfc-4c44-bf86-993f95e9cbb6] Running
	I0110 02:49:47.708519  221603 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-403885" [a6e8ce72-0215-4cdc-b8a5-30863aba7c54] Running
	I0110 02:49:47.708545  221603 system_pods.go:89] "kube-proxy-ss9fs" [61ec02c4-f966-46de-bd2f-81de41f8f1bb] Running
	I0110 02:49:47.708556  221603 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-403885" [fde84223-af60-46ed-b21b-ccd83712651a] Running
	I0110 02:49:47.708563  221603 system_pods.go:89] "storage-provisioner" [83270b0f-e37c-4f07-a380-3ffb6386d492] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:49:47.708593  221603 retry.go:84] will retry after 200ms: missing components: kube-dns
	I0110 02:49:47.905691  221603 system_pods.go:86] 8 kube-system pods found
	I0110 02:49:47.905730  221603 system_pods.go:89] "coredns-7d764666f9-sck2c" [8791efbb-04ac-4811-983a-9ffaf7bb15be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:49:47.905763  221603 system_pods.go:89] "etcd-default-k8s-diff-port-403885" [bdc3bfff-d5db-47c9-8f0f-81256caffbcb] Running
	I0110 02:49:47.905778  221603 system_pods.go:89] "kindnet-4h8vm" [f5820215-db87-4ed2-99a4-3970efeca785] Running
	I0110 02:49:47.905784  221603 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-403885" [708252d4-cdfc-4c44-bf86-993f95e9cbb6] Running
	I0110 02:49:47.905789  221603 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-403885" [a6e8ce72-0215-4cdc-b8a5-30863aba7c54] Running
	I0110 02:49:47.905797  221603 system_pods.go:89] "kube-proxy-ss9fs" [61ec02c4-f966-46de-bd2f-81de41f8f1bb] Running
	I0110 02:49:47.905802  221603 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-403885" [fde84223-af60-46ed-b21b-ccd83712651a] Running
	I0110 02:49:47.905814  221603 system_pods.go:89] "storage-provisioner" [83270b0f-e37c-4f07-a380-3ffb6386d492] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:49:48.270742  221603 system_pods.go:86] 8 kube-system pods found
	I0110 02:49:48.270779  221603 system_pods.go:89] "coredns-7d764666f9-sck2c" [8791efbb-04ac-4811-983a-9ffaf7bb15be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:49:48.270787  221603 system_pods.go:89] "etcd-default-k8s-diff-port-403885" [bdc3bfff-d5db-47c9-8f0f-81256caffbcb] Running
	I0110 02:49:48.270793  221603 system_pods.go:89] "kindnet-4h8vm" [f5820215-db87-4ed2-99a4-3970efeca785] Running
	I0110 02:49:48.270803  221603 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-403885" [708252d4-cdfc-4c44-bf86-993f95e9cbb6] Running
	I0110 02:49:48.270808  221603 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-403885" [a6e8ce72-0215-4cdc-b8a5-30863aba7c54] Running
	I0110 02:49:48.270816  221603 system_pods.go:89] "kube-proxy-ss9fs" [61ec02c4-f966-46de-bd2f-81de41f8f1bb] Running
	I0110 02:49:48.270820  221603 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-403885" [fde84223-af60-46ed-b21b-ccd83712651a] Running
	I0110 02:49:48.270827  221603 system_pods.go:89] "storage-provisioner" [83270b0f-e37c-4f07-a380-3ffb6386d492] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:49:48.741393  221603 system_pods.go:86] 8 kube-system pods found
	I0110 02:49:48.741424  221603 system_pods.go:89] "coredns-7d764666f9-sck2c" [8791efbb-04ac-4811-983a-9ffaf7bb15be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:49:48.741431  221603 system_pods.go:89] "etcd-default-k8s-diff-port-403885" [bdc3bfff-d5db-47c9-8f0f-81256caffbcb] Running
	I0110 02:49:48.741438  221603 system_pods.go:89] "kindnet-4h8vm" [f5820215-db87-4ed2-99a4-3970efeca785] Running
	I0110 02:49:48.741443  221603 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-403885" [708252d4-cdfc-4c44-bf86-993f95e9cbb6] Running
	I0110 02:49:48.741448  221603 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-403885" [a6e8ce72-0215-4cdc-b8a5-30863aba7c54] Running
	I0110 02:49:48.741452  221603 system_pods.go:89] "kube-proxy-ss9fs" [61ec02c4-f966-46de-bd2f-81de41f8f1bb] Running
	I0110 02:49:48.741457  221603 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-403885" [fde84223-af60-46ed-b21b-ccd83712651a] Running
	I0110 02:49:48.741464  221603 system_pods.go:89] "storage-provisioner" [83270b0f-e37c-4f07-a380-3ffb6386d492] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:49:49.198944  221603 system_pods.go:86] 8 kube-system pods found
	I0110 02:49:49.198973  221603 system_pods.go:89] "coredns-7d764666f9-sck2c" [8791efbb-04ac-4811-983a-9ffaf7bb15be] Running
	I0110 02:49:49.198980  221603 system_pods.go:89] "etcd-default-k8s-diff-port-403885" [bdc3bfff-d5db-47c9-8f0f-81256caffbcb] Running
	I0110 02:49:49.198987  221603 system_pods.go:89] "kindnet-4h8vm" [f5820215-db87-4ed2-99a4-3970efeca785] Running
	I0110 02:49:49.198992  221603 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-403885" [708252d4-cdfc-4c44-bf86-993f95e9cbb6] Running
	I0110 02:49:49.198997  221603 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-403885" [a6e8ce72-0215-4cdc-b8a5-30863aba7c54] Running
	I0110 02:49:49.199001  221603 system_pods.go:89] "kube-proxy-ss9fs" [61ec02c4-f966-46de-bd2f-81de41f8f1bb] Running
	I0110 02:49:49.199006  221603 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-403885" [fde84223-af60-46ed-b21b-ccd83712651a] Running
	I0110 02:49:49.199010  221603 system_pods.go:89] "storage-provisioner" [83270b0f-e37c-4f07-a380-3ffb6386d492] Running
	I0110 02:49:49.199017  221603 system_pods.go:126] duration metric: took 1.493406493s to wait for k8s-apps to be running ...
	I0110 02:49:49.199025  221603 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:49:49.199080  221603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:49:49.212882  221603 system_svc.go:56] duration metric: took 13.848142ms WaitForService to wait for kubelet
	I0110 02:49:49.212918  221603 kubeadm.go:587] duration metric: took 15.194907969s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:49:49.212963  221603 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:49:49.215938  221603 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 02:49:49.215966  221603 node_conditions.go:123] node cpu capacity is 2
	I0110 02:49:49.215982  221603 node_conditions.go:105] duration metric: took 3.011497ms to run NodePressure ...
	I0110 02:49:49.215995  221603 start.go:242] waiting for startup goroutines ...
	I0110 02:49:49.216002  221603 start.go:247] waiting for cluster config update ...
	I0110 02:49:49.216013  221603 start.go:256] writing updated cluster config ...
	I0110 02:49:49.216294  221603 ssh_runner.go:195] Run: rm -f paused
	I0110 02:49:49.222307  221603 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:49:49.226769  221603 pod_ready.go:83] waiting for pod "coredns-7d764666f9-sck2c" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:49.232583  221603 pod_ready.go:94] pod "coredns-7d764666f9-sck2c" is "Ready"
	I0110 02:49:49.232605  221603 pod_ready.go:86] duration metric: took 5.817167ms for pod "coredns-7d764666f9-sck2c" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:49.234917  221603 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:49.242710  221603 pod_ready.go:94] pod "etcd-default-k8s-diff-port-403885" is "Ready"
	I0110 02:49:49.242739  221603 pod_ready.go:86] duration metric: took 7.79565ms for pod "etcd-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:49.244995  221603 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:49.249365  221603 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-403885" is "Ready"
	I0110 02:49:49.249428  221603 pod_ready.go:86] duration metric: took 4.364446ms for pod "kube-apiserver-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:49.252107  221603 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:49.627051  221603 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-403885" is "Ready"
	I0110 02:49:49.627079  221603 pod_ready.go:86] duration metric: took 374.905825ms for pod "kube-controller-manager-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:49.826488  221603 pod_ready.go:83] waiting for pod "kube-proxy-ss9fs" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:50.226978  221603 pod_ready.go:94] pod "kube-proxy-ss9fs" is "Ready"
	I0110 02:49:50.227005  221603 pod_ready.go:86] duration metric: took 400.492397ms for pod "kube-proxy-ss9fs" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:50.426403  221603 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:50.826452  221603 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-403885" is "Ready"
	I0110 02:49:50.826483  221603 pod_ready.go:86] duration metric: took 400.056117ms for pod "kube-scheduler-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:49:50.826497  221603 pod_ready.go:40] duration metric: took 1.604160907s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:49:50.926905  221603 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 02:49:50.930644  221603 out.go:203] 
	W0110 02:49:50.934403  221603 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 02:49:50.937498  221603 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:49:50.941402  221603 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-403885" cluster and "default" namespace by default
	I0110 02:49:49.315667  227721 main.go:144] libmachine: Using SSH client type: native
	I0110 02:49:49.316004  227721 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0110 02:49:49.316019  227721 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:49:49.649599  227721 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:49:49.649624  227721 machine.go:97] duration metric: took 4.776867502s to provisionDockerMachine
	I0110 02:49:49.649636  227721 start.go:293] postStartSetup for "newest-cni-733680" (driver="docker")
	I0110 02:49:49.649646  227721 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:49:49.649726  227721 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:49:49.649768  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:49.669276  227721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:49.771913  227721 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:49:49.775059  227721 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:49:49.775090  227721 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:49:49.775102  227721 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/addons for local assets ...
	I0110 02:49:49.775155  227721 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/files for local assets ...
	I0110 02:49:49.775244  227721 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> 41682.pem in /etc/ssl/certs
	I0110 02:49:49.775348  227721 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:49:49.782784  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:49:49.799509  227721 start.go:296] duration metric: took 149.859431ms for postStartSetup
	I0110 02:49:49.799594  227721 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:49:49.799650  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:49.816376  227721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:49.916476  227721 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:49:49.920910  227721 fix.go:56] duration metric: took 5.379023632s for fixHost
	I0110 02:49:49.920936  227721 start.go:83] releasing machines lock for "newest-cni-733680", held for 5.379074707s
	I0110 02:49:49.921003  227721 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-733680
	I0110 02:49:49.937103  227721 ssh_runner.go:195] Run: cat /version.json
	I0110 02:49:49.937151  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:49.937169  227721 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:49:49.937220  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:49.956389  227721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:49.969424  227721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:50.165223  227721 ssh_runner.go:195] Run: systemctl --version
	I0110 02:49:50.171905  227721 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:49:50.208815  227721 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:49:50.213295  227721 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:49:50.213420  227721 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:49:50.221364  227721 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 02:49:50.221389  227721 start.go:496] detecting cgroup driver to use...
	I0110 02:49:50.221420  227721 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 02:49:50.221465  227721 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:49:50.237057  227721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:49:50.250249  227721 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:49:50.250330  227721 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:49:50.266528  227721 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:49:50.280285  227721 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:49:50.396685  227721 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:49:50.521003  227721 docker.go:234] disabling docker service ...
	I0110 02:49:50.521126  227721 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:49:50.540688  227721 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:49:50.556375  227721 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:49:50.683641  227721 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:49:50.808653  227721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:49:50.822460  227721 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:49:50.846923  227721 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:49:50.846995  227721 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:50.856884  227721 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 02:49:50.856953  227721 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:50.866475  227721 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:50.875337  227721 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:50.883923  227721 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:49:50.892014  227721 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:50.901211  227721 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:50.909568  227721 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:49:50.919195  227721 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:49:50.926804  227721 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:49:50.934752  227721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:49:51.133099  227721 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 02:49:51.350591  227721 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:49:51.350658  227721 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:49:51.354535  227721 start.go:574] Will wait 60s for crictl version
	I0110 02:49:51.354589  227721 ssh_runner.go:195] Run: which crictl
	I0110 02:49:51.358129  227721 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:49:51.386452  227721 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:49:51.386534  227721 ssh_runner.go:195] Run: crio --version
	I0110 02:49:51.413414  227721 ssh_runner.go:195] Run: crio --version
	I0110 02:49:51.446245  227721 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:49:51.449219  227721 cli_runner.go:164] Run: docker network inspect newest-cni-733680 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:49:51.467434  227721 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 02:49:51.471068  227721 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:49:51.484080  227721 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0110 02:49:51.487038  227721 kubeadm.go:884] updating cluster {Name:newest-cni-733680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-733680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:49:51.487201  227721 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:49:51.487271  227721 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:49:51.530727  227721 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:49:51.530753  227721 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:49:51.530808  227721 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:49:51.559661  227721 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:49:51.559684  227721 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:49:51.559692  227721 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 02:49:51.559859  227721 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-733680 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-733680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:49:51.559960  227721 ssh_runner.go:195] Run: crio config
	I0110 02:49:51.649920  227721 cni.go:84] Creating CNI manager for ""
	I0110 02:49:51.649944  227721 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:49:51.649967  227721 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I0110 02:49:51.649991  227721 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-733680 NodeName:newest-cni-733680 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:49:51.650121  227721 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-733680"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:49:51.650212  227721 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:49:51.657576  227721 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:49:51.657644  227721 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:49:51.664688  227721 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0110 02:49:51.677725  227721 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:49:51.691704  227721 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I0110 02:49:51.708568  227721 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:49:51.712042  227721 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:49:51.721970  227721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:49:51.852837  227721 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:49:51.873289  227721 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680 for IP: 192.168.76.2
	I0110 02:49:51.873311  227721 certs.go:195] generating shared ca certs ...
	I0110 02:49:51.873327  227721 certs.go:227] acquiring lock for ca certs: {Name:mk60bb8578732e3e8efcf11c7d77fb48585828c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:51.873522  227721 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key
	I0110 02:49:51.873596  227721 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key
	I0110 02:49:51.873608  227721 certs.go:257] generating profile certs ...
	I0110 02:49:51.873727  227721 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/client.key
	I0110 02:49:51.873817  227721 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/apiserver.key.aabe30f3
	I0110 02:49:51.873884  227721 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/proxy-client.key
	I0110 02:49:51.874016  227721 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem (1338 bytes)
	W0110 02:49:51.874066  227721 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168_empty.pem, impossibly tiny 0 bytes
	I0110 02:49:51.874083  227721 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:49:51.874130  227721 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:49:51.874180  227721 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:49:51.874225  227721 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem (1679 bytes)
	I0110 02:49:51.874306  227721 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:49:51.874941  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:49:51.892755  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:49:51.909930  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:49:51.927077  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 02:49:51.944783  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0110 02:49:51.962347  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 02:49:51.985129  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:49:52.012727  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/newest-cni-733680/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 02:49:52.036283  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:49:52.069839  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I0110 02:49:52.094664  227721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I0110 02:49:52.127365  227721 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:49:52.145181  227721 ssh_runner.go:195] Run: openssl version
	I0110 02:49:52.151902  227721 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:49:52.160506  227721 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:49:52.168400  227721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:49:52.172018  227721 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:49:52.172119  227721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:49:52.214314  227721 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:49:52.222022  227721 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I0110 02:49:52.229380  227721 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I0110 02:49:52.236864  227721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I0110 02:49:52.240516  227721 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:58 /usr/share/ca-certificates/4168.pem
	I0110 02:49:52.240627  227721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I0110 02:49:52.281571  227721 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:49:52.289095  227721 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I0110 02:49:52.296245  227721 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I0110 02:49:52.303460  227721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I0110 02:49:52.306912  227721 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:58 /usr/share/ca-certificates/41682.pem
	I0110 02:49:52.307011  227721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I0110 02:49:52.349325  227721 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:49:52.356877  227721 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:49:52.360480  227721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 02:49:52.401232  227721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 02:49:52.442515  227721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 02:49:52.483877  227721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 02:49:52.525987  227721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 02:49:52.571183  227721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
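	Each openssl x509 -checkend 86400 run above exits non-zero if the certificate expires within the next 24 hours; minikube runs it over every control-plane cert before reusing them for a restart. The same check, sketched for two of the files named in the log:
	
	  # Exit status 0 means the cert stays valid for at least the next 86400 seconds.
	  for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	             /var/lib/minikube/certs/etcd/server.crt; do
	    sudo openssl x509 -noout -in "$crt" -checkend 86400 && echo "$crt ok" || echo "$crt expiring"
	  done
	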
	I0110 02:49:52.629363  227721 kubeadm.go:401] StartCluster: {Name:newest-cni-733680 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-733680 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:49:52.629502  227721 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:49:52.629593  227721 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:49:52.707386  227721 cri.go:96] found id: "ea5078ffbb2ab2a6bbaa2c34fad84cb3a86c77b7c22d703c398c8f81af573168"
	I0110 02:49:52.707455  227721 cri.go:96] found id: "564644b28306f6bcee019ff3074f2c75514833f9bf107fbb9bc76939c4b892b7"
	I0110 02:49:52.707485  227721 cri.go:96] found id: "9ede343cb2e278e170c60426ec5533a577c1bde2bb263e32f4b5e061784803c7"
	I0110 02:49:52.707503  227721 cri.go:96] found id: "b2affd0966bb9dbc433519da5f2d62205ba805d51c020dd49252bd59427932dc"
	I0110 02:49:52.707529  227721 cri.go:96] found id: ""
	I0110 02:49:52.707627  227721 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 02:49:52.732021  227721 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:49:52Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:49:52.732136  227721 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:49:52.742812  227721 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 02:49:52.742890  227721 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 02:49:52.742967  227721 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 02:49:52.755410  227721 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 02:49:52.756033  227721 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-733680" does not appear in /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:49:52.756367  227721 kubeconfig.go:62] /home/jenkins/minikube-integration/22414-2353/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-733680" cluster setting kubeconfig missing "newest-cni-733680" context setting]
	I0110 02:49:52.756808  227721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:52.758406  227721 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 02:49:52.770028  227721 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0110 02:49:52.770062  227721 kubeadm.go:602] duration metric: took 27.152503ms to restartPrimaryControlPlane
	I0110 02:49:52.770090  227721 kubeadm.go:403] duration metric: took 140.747566ms to StartCluster
	I0110 02:49:52.770112  227721 settings.go:142] acquiring lock: {Name:mkf19ab3097ad600bdb33c5b47b20062ddaabac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:52.770199  227721 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:49:52.771113  227721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:49:52.771349  227721 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:49:52.771738  227721 config.go:182] Loaded profile config "newest-cni-733680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:49:52.771722  227721 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:49:52.771844  227721 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-733680"
	I0110 02:49:52.771860  227721 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-733680"
	W0110 02:49:52.771866  227721 addons.go:248] addon storage-provisioner should already be in state true
	I0110 02:49:52.771875  227721 addons.go:70] Setting dashboard=true in profile "newest-cni-733680"
	I0110 02:49:52.771890  227721 host.go:66] Checking if "newest-cni-733680" exists ...
	I0110 02:49:52.771898  227721 addons.go:70] Setting default-storageclass=true in profile "newest-cni-733680"
	I0110 02:49:52.771909  227721 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-733680"
	I0110 02:49:52.772185  227721 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:52.772324  227721 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:52.771890  227721 addons.go:239] Setting addon dashboard=true in "newest-cni-733680"
	W0110 02:49:52.772704  227721 addons.go:248] addon dashboard should already be in state true
	I0110 02:49:52.772782  227721 host.go:66] Checking if "newest-cni-733680" exists ...
	I0110 02:49:52.773237  227721 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:52.777211  227721 out.go:179] * Verifying Kubernetes components...
	I0110 02:49:52.780387  227721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:49:52.826239  227721 addons.go:239] Setting addon default-storageclass=true in "newest-cni-733680"
	W0110 02:49:52.826259  227721 addons.go:248] addon default-storageclass should already be in state true
	I0110 02:49:52.826283  227721 host.go:66] Checking if "newest-cni-733680" exists ...
	I0110 02:49:52.826683  227721 cli_runner.go:164] Run: docker container inspect newest-cni-733680 --format={{.State.Status}}
	I0110 02:49:52.834786  227721 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:49:52.838077  227721 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:49:52.838103  227721 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:49:52.838172  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:52.850062  227721 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 02:49:52.853327  227721 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 02:49:52.858599  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 02:49:52.858622  227721 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 02:49:52.858689  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:52.875042  227721 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:49:52.875065  227721 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:49:52.875142  227721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-733680
	I0110 02:49:52.895967  227721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:52.918908  227721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:52.923631  227721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/newest-cni-733680/id_rsa Username:docker}
	I0110 02:49:53.067602  227721 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:49:53.205587  227721 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:49:53.237603  227721 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:49:53.240836  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 02:49:53.240858  227721 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 02:49:53.295176  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 02:49:53.295248  227721 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 02:49:53.350797  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 02:49:53.350872  227721 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 02:49:53.411170  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 02:49:53.411250  227721 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 02:49:53.461048  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 02:49:53.461124  227721 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 02:49:53.492736  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 02:49:53.492809  227721 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 02:49:53.513750  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 02:49:53.513845  227721 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 02:49:53.556492  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 02:49:53.556562  227721 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 02:49:53.581438  227721 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:49:53.581506  227721 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 02:49:53.613630  227721 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:49:57.046329  227721 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.978687072s)
	I0110 02:49:57.046422  227721 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.84080946s)
	I0110 02:49:57.046637  227721 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:49:57.046449  227721 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.808826332s)
	I0110 02:49:57.046542  227721 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.432840606s)
	I0110 02:49:57.047187  227721 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:49:57.050038  227721 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-733680 addons enable metrics-server
	
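	To act on the hint above, the addon can be enabled and the resulting set inspected with the standard minikube commands (profile name taken from this run):
	
	  minikube -p newest-cni-733680 addons enable metrics-server
	  minikube -p newest-cni-733680 addons list
	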
	I0110 02:49:57.071486  227721 api_server.go:72] duration metric: took 4.300106932s to wait for apiserver process to appear ...
	I0110 02:49:57.071509  227721 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:49:57.071527  227721 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:49:57.080089  227721 api_server.go:325] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 02:49:57.080120  227721 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 02:49:57.093756  227721 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0110 02:49:57.096511  227721 addons.go:530] duration metric: took 4.324781462s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0110 02:49:57.572427  227721 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:49:57.586847  227721 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 02:49:57.587967  227721 api_server.go:141] control plane version: v1.35.0
	I0110 02:49:57.588028  227721 api_server.go:131] duration metric: took 516.494969ms to wait for apiserver health ...
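	The healthz polling above saw a 500 first (only the rbac/bootstrap-roles post-start hook was still pending) and a 200 on the next poll. Assuming the default anonymous binding for the health endpoints is in place, the same probe can be reproduced from the host or node:
	
	  curl -k https://192.168.76.2:8443/healthz             # plain ok/failed
	  curl -k "https://192.168.76.2:8443/healthz?verbose"   # per-hook [+]/[-] breakdown as dumped above
	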
	I0110 02:49:57.588047  227721 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:49:57.592332  227721 system_pods.go:59] 8 kube-system pods found
	I0110 02:49:57.592387  227721 system_pods.go:61] "coredns-7d764666f9-7djps" [5b40a1e2-6d92-4e33-8a94-19e45bc18937] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 02:49:57.592406  227721 system_pods.go:61] "etcd-newest-cni-733680" [15903ffb-9c75-402b-aaf1-2ea433e993a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:49:57.592415  227721 system_pods.go:61] "kindnet-bnwfz" [49fa87e1-f1d9-4315-918f-b079caade618] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 02:49:57.592429  227721 system_pods.go:61] "kube-apiserver-newest-cni-733680" [7daa7cb6-3a2f-4d11-aadd-c7aab970ff4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:49:57.592437  227721 system_pods.go:61] "kube-controller-manager-newest-cni-733680" [f5a37a0a-47e7-41e5-a01f-16efb8c43166] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:49:57.592476  227721 system_pods.go:61] "kube-proxy-mnr64" [0f5cc6b8-6364-4711-a838-fb70b057c4ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 02:49:57.592490  227721 system_pods.go:61] "kube-scheduler-newest-cni-733680" [d5f0e1ba-9a06-4172-a5ec-140a553a47ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:49:57.592502  227721 system_pods.go:61] "storage-provisioner" [1a9287f6-2918-4098-8444-2f1c2c4dda71] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 02:49:57.592514  227721 system_pods.go:74] duration metric: took 4.460764ms to wait for pod list to return data ...
	I0110 02:49:57.592543  227721 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:49:57.594834  227721 default_sa.go:45] found service account: "default"
	I0110 02:49:57.594857  227721 default_sa.go:55] duration metric: took 2.300878ms for default service account to be created ...
	I0110 02:49:57.594870  227721 kubeadm.go:587] duration metric: took 4.823494248s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 02:49:57.594915  227721 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:49:57.597527  227721 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 02:49:57.597564  227721 node_conditions.go:123] node cpu capacity is 2
	I0110 02:49:57.597577  227721 node_conditions.go:105] duration metric: took 2.655954ms to run NodePressure ...
	I0110 02:49:57.597610  227721 start.go:242] waiting for startup goroutines ...
	I0110 02:49:57.597619  227721 start.go:247] waiting for cluster config update ...
	I0110 02:49:57.597664  227721 start.go:256] writing updated cluster config ...
	I0110 02:49:57.597984  227721 ssh_runner.go:195] Run: rm -f paused
	I0110 02:49:57.679374  227721 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 02:49:57.682549  227721 out.go:203] 
	W0110 02:49:57.685626  227721 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 02:49:57.690644  227721 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:49:57.693501  227721 out.go:179] * Done! kubectl is now configured to use "newest-cni-733680" cluster and "default" namespace by default
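	The skew warning above is raised because the host kubectl (1.33.2) trails the cluster (1.35.0) by two minor versions; the client bundled with minikube avoids that, e.g.:
	
	  minikube -p newest-cni-733680 kubectl -- version
	  minikube -p newest-cni-733680 kubectl -- get pods -A
	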
	
	
	==> CRI-O <==
	Jan 10 02:49:48 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:48.012370058Z" level=info msg="Created container 4d952b2d80577621cd1bf73425ae8b3f1a645d4fb4a19e7be1a860d001e72d1b: kube-system/coredns-7d764666f9-sck2c/coredns" id=e730c699-bba3-49ea-828e-670a950ca168 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:49:48 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:48.016625625Z" level=info msg="Starting container: 4d952b2d80577621cd1bf73425ae8b3f1a645d4fb4a19e7be1a860d001e72d1b" id=63acdd20-42ad-4b70-a605-0a3ef443d796 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:49:48 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:48.028272193Z" level=info msg="Started container" PID=1753 containerID=4d952b2d80577621cd1bf73425ae8b3f1a645d4fb4a19e7be1a860d001e72d1b description=kube-system/coredns-7d764666f9-sck2c/coredns id=63acdd20-42ad-4b70-a605-0a3ef443d796 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9473658bdec948492a8826bc57b6886e50b2b5eba232b2d1676188c08b7feb20
	Jan 10 02:49:51 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:51.578237064Z" level=info msg="Running pod sandbox: default/busybox/POD" id=fddf7385-e847-41fc-bb90-5a225e9afdd6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:49:51 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:51.578314698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:51 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:51.583238055Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:40bab50e8152445406c51638aa0debd922d030928f3dac7a03832d59468e8451 UID:e41c4ad3-2a0a-45ba-8fe8-31dfd38ad288 NetNS:/var/run/netns/6ac95a0f-642f-43b2-964a-e2d1082c8e2c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004ab120}] Aliases:map[]}"
	Jan 10 02:49:51 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:51.583271252Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 10 02:49:51 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:51.603380028Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:40bab50e8152445406c51638aa0debd922d030928f3dac7a03832d59468e8451 UID:e41c4ad3-2a0a-45ba-8fe8-31dfd38ad288 NetNS:/var/run/netns/6ac95a0f-642f-43b2-964a-e2d1082c8e2c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004ab120}] Aliases:map[]}"
	Jan 10 02:49:51 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:51.60356436Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 10 02:49:51 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:51.607512456Z" level=info msg="Ran pod sandbox 40bab50e8152445406c51638aa0debd922d030928f3dac7a03832d59468e8451 with infra container: default/busybox/POD" id=fddf7385-e847-41fc-bb90-5a225e9afdd6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 02:49:51 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:51.608877392Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3087b0ec-e8da-4ab7-92c8-11d6b2fd9e93 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:49:51 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:51.609034558Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=3087b0ec-e8da-4ab7-92c8-11d6b2fd9e93 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:49:51 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:51.60912089Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=3087b0ec-e8da-4ab7-92c8-11d6b2fd9e93 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:49:51 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:51.612499467Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=749e4ed0-f34e-42e9-9b6a-39c7e1268e74 name=/runtime.v1.ImageService/PullImage
	Jan 10 02:49:51 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:51.612889881Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 10 02:49:53 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:53.759994873Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=749e4ed0-f34e-42e9-9b6a-39c7e1268e74 name=/runtime.v1.ImageService/PullImage
	Jan 10 02:49:53 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:53.760949375Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4a436dcf-17ac-4a17-9a3d-51c29469b322 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:49:53 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:53.762849616Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=516db881-1944-4ad6-b121-bf73e0f8eb13 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:49:53 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:53.768200137Z" level=info msg="Creating container: default/busybox/busybox" id=06375860-4e21-45e1-a1a4-69a19094794a name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:49:53 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:53.768345233Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:53 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:53.773457327Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:53 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:53.773921118Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:49:53 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:53.792367661Z" level=info msg="Created container f9c678ae7fb8ea65e76c23d7892957b40fbda68d3876d92b1922488f0f288dda: default/busybox/busybox" id=06375860-4e21-45e1-a1a4-69a19094794a name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:49:53 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:53.795517157Z" level=info msg="Starting container: f9c678ae7fb8ea65e76c23d7892957b40fbda68d3876d92b1922488f0f288dda" id=29067d31-5cc0-471a-a987-0810228da175 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:49:53 default-k8s-diff-port-403885 crio[836]: time="2026-01-10T02:49:53.80304627Z" level=info msg="Started container" PID=1819 containerID=f9c678ae7fb8ea65e76c23d7892957b40fbda68d3876d92b1922488f0f288dda description=default/busybox/busybox id=29067d31-5cc0-471a-a987-0810228da175 name=/runtime.v1.RuntimeService/StartContainer sandboxID=40bab50e8152445406c51638aa0debd922d030928f3dac7a03832d59468e8451
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	f9c678ae7fb8e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   40bab50e81524       busybox                                                default
	4d952b2d80577       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                      14 seconds ago      Running             coredns                   0                   9473658bdec94       coredns-7d764666f9-sck2c                               kube-system
	d929f628d8afa       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago      Running             storage-provisioner       0                   c6b3a6fef3c39       storage-provisioner                                    kube-system
	fabd293279500       docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3    25 seconds ago      Running             kindnet-cni               0                   cc5655fa45ae4       kindnet-4h8vm                                          kube-system
	695e9f59c19d2       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                      27 seconds ago      Running             kube-proxy                0                   d0252973f6859       kube-proxy-ss9fs                                       kube-system
	263afb376e9f7       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                      40 seconds ago      Running             kube-controller-manager   0                   fd74ed7aa6804       kube-controller-manager-default-k8s-diff-port-403885   kube-system
	db023c741355c       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                      40 seconds ago      Running             etcd                      0                   5f5585d75d716       etcd-default-k8s-diff-port-403885                      kube-system
	8f95d24a745b7       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                      40 seconds ago      Running             kube-apiserver            0                   de656b404c681       kube-apiserver-default-k8s-diff-port-403885            kube-system
	1dd6812d24161       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                      40 seconds ago      Running             kube-scheduler            0                   5238e56ebd2c8       kube-scheduler-default-k8s-diff-port-403885            kube-system
	
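	The container status table above is CRI-O's view of the node and can be regenerated on demand (quoting here is an assumed way of passing the command through minikube ssh):
	
	  minikube -p default-k8s-diff-port-403885 ssh "sudo crictl ps -a"
	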
	
	==> coredns [4d952b2d80577621cd1bf73425ae8b3f1a645d4fb4a19e7be1a860d001e72d1b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:34824 - 43293 "HINFO IN 3318885830748339278.6178306130652428390. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003878937s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-403885
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-403885
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=default-k8s-diff-port-403885
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_49_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:49:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-403885
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:49:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:49:59 +0000   Sat, 10 Jan 2026 02:49:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:49:59 +0000   Sat, 10 Jan 2026 02:49:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:49:59 +0000   Sat, 10 Jan 2026 02:49:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:49:59 +0000   Sat, 10 Jan 2026 02:49:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-403885
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                c5be33d9-0382-423b-9b90-3c979c14f2d9
	  Boot ID:                    41f52236-76b9-4dc8-bfe3-121fbdeb9659
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-7d764666f9-sck2c                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-default-k8s-diff-port-403885                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-4h8vm                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-default-k8s-diff-port-403885             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-403885    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-ss9fs                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-default-k8s-diff-port-403885             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  30s   node-controller  Node default-k8s-diff-port-403885 event: Registered Node default-k8s-diff-port-403885 in Controller
	
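	The 850m CPU request in the Allocated resources block is the sum of the per-pod requests listed above (100m coredns + 100m etcd + 100m kindnet + 250m kube-apiserver + 200m kube-controller-manager + 100m kube-scheduler), i.e. 42.5% of the 2-CPU allocatable. The same summary can be pulled live:
	
	  kubectl describe node default-k8s-diff-port-403885
	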
	
	==> dmesg <==
	[Jan10 02:17] overlayfs: idmapped layers are currently not supported
	[Jan10 02:18] overlayfs: idmapped layers are currently not supported
	[ +23.212409] overlayfs: idmapped layers are currently not supported
	[  +9.866169] overlayfs: idmapped layers are currently not supported
	[Jan10 02:19] overlayfs: idmapped layers are currently not supported
	[Jan10 02:20] overlayfs: idmapped layers are currently not supported
	[ +35.960240] overlayfs: idmapped layers are currently not supported
	[Jan10 02:21] overlayfs: idmapped layers are currently not supported
	[Jan10 02:23] overlayfs: idmapped layers are currently not supported
	[Jan10 02:24] overlayfs: idmapped layers are currently not supported
	[Jan10 02:25] overlayfs: idmapped layers are currently not supported
	[Jan10 02:30] overlayfs: idmapped layers are currently not supported
	[Jan10 02:31] overlayfs: idmapped layers are currently not supported
	[Jan10 02:35] overlayfs: idmapped layers are currently not supported
	[Jan10 02:37] overlayfs: idmapped layers are currently not supported
	[Jan10 02:41] overlayfs: idmapped layers are currently not supported
	[ +37.534021] overlayfs: idmapped layers are currently not supported
	[Jan10 02:43] overlayfs: idmapped layers are currently not supported
	[Jan10 02:44] overlayfs: idmapped layers are currently not supported
	[Jan10 02:45] overlayfs: idmapped layers are currently not supported
	[Jan10 02:46] overlayfs: idmapped layers are currently not supported
	[Jan10 02:48] overlayfs: idmapped layers are currently not supported
	[Jan10 02:49] overlayfs: idmapped layers are currently not supported
	[  +4.690964] overlayfs: idmapped layers are currently not supported
	[ +26.361261] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [db023c741355c2645d051f436535a3fee63e4d963541d64a2afeb2c939175fbe] <==
	{"level":"info","ts":"2026-01-10T02:49:21.719379Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T02:49:22.703851Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-10T02:49:22.703963Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-10T02:49:22.704035Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2026-01-10T02:49:22.704090Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:49:22.704142Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:49:22.711837Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-10T02:49:22.711952Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:49:22.712000Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2026-01-10T02:49:22.712035Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-10T02:49:22.715860Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:49:22.717003Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-diff-port-403885 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:49:22.717229Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:49:22.717348Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:49:22.717404Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T02:49:22.717484Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T02:49:22.717585Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-10T02:49:22.717574Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:49:22.717631Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:49:22.755087Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:49:22.764398Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:49:22.764469Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:49:22.764490Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T02:49:22.781386Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T02:49:22.782669Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
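	The etcd log above shows a healthy single-member cluster electing itself leader at term 2. For a quick health probe, etcdctl inside the etcd pod can be used; the cert paths below are the minikube defaults implied by the expiry checks earlier in this report and are an assumption, not taken verbatim from this dump:
	
	  kubectl -n kube-system exec etcd-default-k8s-diff-port-403885 -- etcdctl \
	    --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt \
	    --key=/var/lib/minikube/certs/etcd/healthcheck-client.key \
	    endpoint health
	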
	
	==> kernel <==
	 02:50:02 up  1:32,  0 user,  load average: 3.94, 2.73, 2.16
	Linux default-k8s-diff-port-403885 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fabd2932795008091bc2161a51157425c8c2ec664af81f8cacd5862e416fbdf7] <==
	I0110 02:49:37.225076       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:49:37.225956       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0110 02:49:37.226209       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:49:37.226282       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:49:37.226306       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:49:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:49:37.426069       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:49:37.426625       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:49:37.427005       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:49:37.427934       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 02:49:37.628598       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:49:37.628690       1 metrics.go:72] Registering metrics
	I0110 02:49:37.628772       1 controller.go:711] "Syncing nftables rules"
	I0110 02:49:47.427147       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 02:49:47.427215       1 main.go:301] handling current node
	I0110 02:49:57.427896       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 02:49:57.428005       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8f95d24a745b71613396e301b4e525aa74fcc16c589aad543e3c2467ca9d306c] <==
	I0110 02:49:25.896655       1 policy_source.go:248] refreshing policies
	E0110 02:49:25.928680       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I0110 02:49:25.977855       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:49:26.004851       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 02:49:26.005692       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:49:26.015115       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:49:26.068861       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 02:49:26.381587       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0110 02:49:26.389495       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0110 02:49:26.390215       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:49:27.395687       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:49:27.455255       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:49:27.580386       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0110 02:49:27.587587       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0110 02:49:27.588710       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 02:49:27.598137       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:49:27.633044       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:49:28.628611       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:49:28.666664       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0110 02:49:28.694305       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0110 02:49:33.407631       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:49:33.424851       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 02:49:33.492942       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0110 02:49:33.493100       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0110 02:49:33.570464       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [263afb376e9f73ee95d342467badf13425d78df3ca560324dd17973afa6e3065] <==
	I0110 02:49:32.489294       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:49:32.489298       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:32.489566       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:32.481244       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:49:32.490200       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:32.490348       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:32.490398       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:32.492912       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:32.492993       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:32.493094       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:32.494309       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:32.497525       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:32.497617       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:32.498220       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:32.499150       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:32.504058       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:32.508341       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:32.508617       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:32.512017       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:32.517430       1 range_allocator.go:433] "Set node PodCIDR" node="default-k8s-diff-port-403885" podCIDRs=["10.244.0.0/24"]
	I0110 02:49:32.590509       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:32.595986       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:32.596076       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:49:32.596105       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:49:52.481949       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [695e9f59c19d209b2e14281daa72c2a9a8dfb2877d42642595a8b83def112397] <==
	I0110 02:49:34.602485       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:49:34.714479       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:49:34.815115       1 shared_informer.go:377] "Caches are synced"
	I0110 02:49:34.815173       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0110 02:49:34.815258       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:49:34.870056       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:49:34.870116       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:49:34.875472       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:49:34.875764       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:49:34.875779       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:49:34.877693       1 config.go:200] "Starting service config controller"
	I0110 02:49:34.877709       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:49:34.877725       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:49:34.877729       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:49:34.877751       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:49:34.877755       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:49:34.884743       1 config.go:309] "Starting node config controller"
	I0110 02:49:34.884763       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:49:34.884771       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:49:34.983157       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:49:34.983192       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 02:49:34.983236       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1dd6812d2416188fed2e2968268700b769ca2dfa7ab2eda74fb1b7013294c84e] <==
	E0110 02:49:25.852265       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 02:49:25.852312       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 02:49:25.854470       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 02:49:25.854585       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 02:49:25.856511       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 02:49:25.856668       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 02:49:25.856914       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 02:49:25.857024       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 02:49:25.857148       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 02:49:25.857233       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 02:49:25.857362       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 02:49:25.857545       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 02:49:26.672112       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 02:49:26.713631       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 02:49:26.713921       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 02:49:26.713978       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 02:49:26.717047       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 02:49:26.753857       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 02:49:26.775820       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 02:49:26.788048       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1268" type="*v1.ConfigMap"
	E0110 02:49:26.812004       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 02:49:26.974482       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 02:49:27.070930       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 02:49:27.093484       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	I0110 02:49:29.718575       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:49:33 default-k8s-diff-port-403885 kubelet[1284]: I0110 02:49:33.646093    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5820215-db87-4ed2-99a4-3970efeca785-lib-modules\") pod \"kindnet-4h8vm\" (UID: \"f5820215-db87-4ed2-99a4-3970efeca785\") " pod="kube-system/kindnet-4h8vm"
	Jan 10 02:49:33 default-k8s-diff-port-403885 kubelet[1284]: I0110 02:49:33.646205    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/61ec02c4-f966-46de-bd2f-81de41f8f1bb-kube-proxy\") pod \"kube-proxy-ss9fs\" (UID: \"61ec02c4-f966-46de-bd2f-81de41f8f1bb\") " pod="kube-system/kube-proxy-ss9fs"
	Jan 10 02:49:33 default-k8s-diff-port-403885 kubelet[1284]: I0110 02:49:33.646271    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh2p2\" (UniqueName: \"kubernetes.io/projected/61ec02c4-f966-46de-bd2f-81de41f8f1bb-kube-api-access-qh2p2\") pod \"kube-proxy-ss9fs\" (UID: \"61ec02c4-f966-46de-bd2f-81de41f8f1bb\") " pod="kube-system/kube-proxy-ss9fs"
	Jan 10 02:49:33 default-k8s-diff-port-403885 kubelet[1284]: I0110 02:49:33.842544    1284 swap_util.go:78] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Jan 10 02:49:34 default-k8s-diff-port-403885 kubelet[1284]: W0110 02:49:34.264263    1284 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/68becb0d3e52aa8382b4740bd6ddcc35bdfee6a5501059560ca4b5b88dc1ba82/crio-d0252973f685986dd472166718fc823a9449bafb2fd8d144b7836bb469f1ea35 WatchSource:0}: Error finding container d0252973f685986dd472166718fc823a9449bafb2fd8d144b7836bb469f1ea35: Status 404 returned error can't find the container with id d0252973f685986dd472166718fc823a9449bafb2fd8d144b7836bb469f1ea35
	Jan 10 02:49:35 default-k8s-diff-port-403885 kubelet[1284]: E0110 02:49:35.226493    1284 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-403885" containerName="etcd"
	Jan 10 02:49:35 default-k8s-diff-port-403885 kubelet[1284]: I0110 02:49:35.242431    1284 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-ss9fs" podStartSLOduration=2.242413185 podStartE2EDuration="2.242413185s" podCreationTimestamp="2026-01-10 02:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:49:34.947080965 +0000 UTC m=+6.457208619" watchObservedRunningTime="2026-01-10 02:49:35.242413185 +0000 UTC m=+6.752540830"
	Jan 10 02:49:38 default-k8s-diff-port-403885 kubelet[1284]: I0110 02:49:38.006007    1284 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-4h8vm" podStartSLOduration=2.253085555 podStartE2EDuration="5.005988945s" podCreationTimestamp="2026-01-10 02:49:33 +0000 UTC" firstStartedPulling="2026-01-10 02:49:34.325157312 +0000 UTC m=+5.835284957" lastFinishedPulling="2026-01-10 02:49:37.078060693 +0000 UTC m=+8.588188347" observedRunningTime="2026-01-10 02:49:37.993404483 +0000 UTC m=+9.503532129" watchObservedRunningTime="2026-01-10 02:49:38.005988945 +0000 UTC m=+9.516116591"
	Jan 10 02:49:39 default-k8s-diff-port-403885 kubelet[1284]: E0110 02:49:39.669166    1284 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-403885" containerName="kube-controller-manager"
	Jan 10 02:49:41 default-k8s-diff-port-403885 kubelet[1284]: E0110 02:49:41.578206    1284 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-403885" containerName="kube-scheduler"
	Jan 10 02:49:41 default-k8s-diff-port-403885 kubelet[1284]: E0110 02:49:41.805546    1284 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-403885" containerName="kube-apiserver"
	Jan 10 02:49:45 default-k8s-diff-port-403885 kubelet[1284]: E0110 02:49:45.228064    1284 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-403885" containerName="etcd"
	Jan 10 02:49:47 default-k8s-diff-port-403885 kubelet[1284]: I0110 02:49:47.547351    1284 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Jan 10 02:49:47 default-k8s-diff-port-403885 kubelet[1284]: I0110 02:49:47.690729    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/83270b0f-e37c-4f07-a380-3ffb6386d492-tmp\") pod \"storage-provisioner\" (UID: \"83270b0f-e37c-4f07-a380-3ffb6386d492\") " pod="kube-system/storage-provisioner"
	Jan 10 02:49:47 default-k8s-diff-port-403885 kubelet[1284]: I0110 02:49:47.690970    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkc4b\" (UniqueName: \"kubernetes.io/projected/83270b0f-e37c-4f07-a380-3ffb6386d492-kube-api-access-qkc4b\") pod \"storage-provisioner\" (UID: \"83270b0f-e37c-4f07-a380-3ffb6386d492\") " pod="kube-system/storage-provisioner"
	Jan 10 02:49:47 default-k8s-diff-port-403885 kubelet[1284]: I0110 02:49:47.691069    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8791efbb-04ac-4811-983a-9ffaf7bb15be-config-volume\") pod \"coredns-7d764666f9-sck2c\" (UID: \"8791efbb-04ac-4811-983a-9ffaf7bb15be\") " pod="kube-system/coredns-7d764666f9-sck2c"
	Jan 10 02:49:47 default-k8s-diff-port-403885 kubelet[1284]: I0110 02:49:47.691159    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f8km\" (UniqueName: \"kubernetes.io/projected/8791efbb-04ac-4811-983a-9ffaf7bb15be-kube-api-access-7f8km\") pod \"coredns-7d764666f9-sck2c\" (UID: \"8791efbb-04ac-4811-983a-9ffaf7bb15be\") " pod="kube-system/coredns-7d764666f9-sck2c"
	Jan 10 02:49:47 default-k8s-diff-port-403885 kubelet[1284]: W0110 02:49:47.942859    1284 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/68becb0d3e52aa8382b4740bd6ddcc35bdfee6a5501059560ca4b5b88dc1ba82/crio-9473658bdec948492a8826bc57b6886e50b2b5eba232b2d1676188c08b7feb20 WatchSource:0}: Error finding container 9473658bdec948492a8826bc57b6886e50b2b5eba232b2d1676188c08b7feb20: Status 404 returned error can't find the container with id 9473658bdec948492a8826bc57b6886e50b2b5eba232b2d1676188c08b7feb20
	Jan 10 02:49:48 default-k8s-diff-port-403885 kubelet[1284]: E0110 02:49:48.988134    1284 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-sck2c" containerName="coredns"
	Jan 10 02:49:49 default-k8s-diff-port-403885 kubelet[1284]: I0110 02:49:49.071931    1284 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-sck2c" podStartSLOduration=16.071900286 podStartE2EDuration="16.071900286s" podCreationTimestamp="2026-01-10 02:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:49:49.02942398 +0000 UTC m=+20.539551634" watchObservedRunningTime="2026-01-10 02:49:49.071900286 +0000 UTC m=+20.582027973"
	Jan 10 02:49:49 default-k8s-diff-port-403885 kubelet[1284]: I0110 02:49:49.092144    1284 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.092120821 podStartE2EDuration="14.092120821s" podCreationTimestamp="2026-01-10 02:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 02:49:49.073137505 +0000 UTC m=+20.583265159" watchObservedRunningTime="2026-01-10 02:49:49.092120821 +0000 UTC m=+20.602248475"
	Jan 10 02:49:50 default-k8s-diff-port-403885 kubelet[1284]: E0110 02:49:50.018751    1284 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-sck2c" containerName="coredns"
	Jan 10 02:49:51 default-k8s-diff-port-403885 kubelet[1284]: E0110 02:49:51.021804    1284 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-sck2c" containerName="coredns"
	Jan 10 02:49:51 default-k8s-diff-port-403885 kubelet[1284]: I0110 02:49:51.340199    1284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh54t\" (UniqueName: \"kubernetes.io/projected/e41c4ad3-2a0a-45ba-8fe8-31dfd38ad288-kube-api-access-vh54t\") pod \"busybox\" (UID: \"e41c4ad3-2a0a-45ba-8fe8-31dfd38ad288\") " pod="default/busybox"
	Jan 10 02:49:51 default-k8s-diff-port-403885 kubelet[1284]: W0110 02:49:51.605474    1284 manager.go:1172] Failed to process watch event {EventType:0 Name:/docker/68becb0d3e52aa8382b4740bd6ddcc35bdfee6a5501059560ca4b5b88dc1ba82/crio-40bab50e8152445406c51638aa0debd922d030928f3dac7a03832d59468e8451 WatchSource:0}: Error finding container 40bab50e8152445406c51638aa0debd922d030928f3dac7a03832d59468e8451: Status 404 returned error can't find the container with id 40bab50e8152445406c51638aa0debd922d030928f3dac7a03832d59468e8451
	
	
	==> storage-provisioner [d929f628d8afad1df60cb9ac80b14a23bd49047aa8b0c82304cfeb719641a271] <==
	I0110 02:49:48.036418       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 02:49:48.053595       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 02:49:48.053654       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 02:49:48.056262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:49:48.062353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:49:48.062510       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 02:49:48.062668       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-403885_8aca4a07-fc21-478d-ae9c-600b8c30b3f4!
	I0110 02:49:48.063571       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa1af1bb-334c-4465-9421-7ffe1f5fe2f3", APIVersion:"v1", ResourceVersion:"415", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-403885_8aca4a07-fc21-478d-ae9c-600b8c30b3f4 became leader
	W0110 02:49:48.086556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:49:48.121671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:49:48.163169       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-403885_8aca4a07-fc21-478d-ae9c-600b8c30b3f4!
	W0110 02:49:50.126918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:49:50.132180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:49:52.135546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:49:52.140978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:49:54.144425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:49:54.153271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:49:56.156130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:49:56.160431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:49:58.167913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:49:58.172967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:50:00.187692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:50:00.202500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:50:02.206835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:50:02.214603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-403885 -n default-k8s-diff-port-403885
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-403885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.63s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-403885 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-403885 --alsologtostderr -v=1: exit status 80 (1.79824444s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-403885 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:51:19.988476  237156 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:51:19.988674  237156 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:51:19.988702  237156 out.go:374] Setting ErrFile to fd 2...
	I0110 02:51:19.988729  237156 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:51:19.989117  237156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:51:19.989498  237156 out.go:368] Setting JSON to false
	I0110 02:51:19.989551  237156 mustload.go:66] Loading cluster: default-k8s-diff-port-403885
	I0110 02:51:19.990239  237156 config.go:182] Loaded profile config "default-k8s-diff-port-403885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:51:19.991061  237156 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-403885 --format={{.State.Status}}
	I0110 02:51:20.015256  237156 host.go:66] Checking if "default-k8s-diff-port-403885" exists ...
	I0110 02:51:20.015645  237156 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:51:20.076007  237156 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2026-01-10 02:51:20.066179832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:51:20.076679  237156 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22414/minikube-v1.37.0-1767924026-22414-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767924026-22414/minikube-v1.37.0-1767924026-22414-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767924026-22414-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:default-k8s-diff-port-403885 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0110 02:51:20.080091  237156 out.go:179] * Pausing node default-k8s-diff-port-403885 ... 
	I0110 02:51:20.083015  237156 host.go:66] Checking if "default-k8s-diff-port-403885" exists ...
	I0110 02:51:20.083413  237156 ssh_runner.go:195] Run: systemctl --version
	I0110 02:51:20.083470  237156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:51:20.103916  237156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/default-k8s-diff-port-403885/id_rsa Username:docker}
	I0110 02:51:20.210684  237156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:51:20.232300  237156 pause.go:52] kubelet running: true
	I0110 02:51:20.232371  237156 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:51:20.491647  237156 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:51:20.491756  237156 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:51:20.558387  237156 cri.go:96] found id: "910d0dab6a77a8bbc264e0daefb03ba43c5628340ff9b5c13ebb7ce7186fb91b"
	I0110 02:51:20.558461  237156 cri.go:96] found id: "fa1837feaa39e020154dcf1fa0e3cfdcc389657a13b2a7911522e055b8d5c205"
	I0110 02:51:20.558481  237156 cri.go:96] found id: "bc847f22b7096dcdd6f43e2f7fd0bb0bec20221d544edea551adf64088c8d1f9"
	I0110 02:51:20.558499  237156 cri.go:96] found id: "b162bf1f9ee4d4f795e11b6f38b571c7fe566f4233b8a46e579adb8bdc2bdc39"
	I0110 02:51:20.558537  237156 cri.go:96] found id: "f3a4dab3b349942b6b00dff9e9ac2e0626b5c7d8890cd8aa838981f79a2d1c23"
	I0110 02:51:20.558561  237156 cri.go:96] found id: "ccaef7514d5ac77efad7a1f91ebd93145b03e3462b1154eb3f723d4a112acdf6"
	I0110 02:51:20.558580  237156 cri.go:96] found id: "0a4524e2475eb21f6dbb36bb2158a508c31075979547ca9c30367ced5eab40f6"
	I0110 02:51:20.558600  237156 cri.go:96] found id: "73f1ff616118310e5ea35e9ab5496ff345a48d4dbb2a0c33eba742bc17a20097"
	I0110 02:51:20.558620  237156 cri.go:96] found id: "3eef2c483d9e9a9a8a3ab6bf3e8cf944fedbac4b9d208f8d1d0c7a3086f100ac"
	I0110 02:51:20.558650  237156 cri.go:96] found id: "cea7cb71f60a3bf1cc0f92af289dd1a12432224c4eec12b146d182a413e0d79a"
	I0110 02:51:20.558673  237156 cri.go:96] found id: "c189fb7bad01fb39bc337c1bc6565e786ee9e056431d9067b94b07b97b2427cc"
	I0110 02:51:20.558692  237156 cri.go:96] found id: ""
	I0110 02:51:20.558779  237156 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:51:20.578211  237156 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:51:20Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:51:20.748628  237156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:51:20.761481  237156 pause.go:52] kubelet running: false
	I0110 02:51:20.761550  237156 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:51:20.931121  237156 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:51:20.931226  237156 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:51:21.002682  237156 cri.go:96] found id: "910d0dab6a77a8bbc264e0daefb03ba43c5628340ff9b5c13ebb7ce7186fb91b"
	I0110 02:51:21.002712  237156 cri.go:96] found id: "fa1837feaa39e020154dcf1fa0e3cfdcc389657a13b2a7911522e055b8d5c205"
	I0110 02:51:21.002726  237156 cri.go:96] found id: "bc847f22b7096dcdd6f43e2f7fd0bb0bec20221d544edea551adf64088c8d1f9"
	I0110 02:51:21.002730  237156 cri.go:96] found id: "b162bf1f9ee4d4f795e11b6f38b571c7fe566f4233b8a46e579adb8bdc2bdc39"
	I0110 02:51:21.002738  237156 cri.go:96] found id: "f3a4dab3b349942b6b00dff9e9ac2e0626b5c7d8890cd8aa838981f79a2d1c23"
	I0110 02:51:21.002741  237156 cri.go:96] found id: "ccaef7514d5ac77efad7a1f91ebd93145b03e3462b1154eb3f723d4a112acdf6"
	I0110 02:51:21.002744  237156 cri.go:96] found id: "0a4524e2475eb21f6dbb36bb2158a508c31075979547ca9c30367ced5eab40f6"
	I0110 02:51:21.002750  237156 cri.go:96] found id: "73f1ff616118310e5ea35e9ab5496ff345a48d4dbb2a0c33eba742bc17a20097"
	I0110 02:51:21.002759  237156 cri.go:96] found id: "3eef2c483d9e9a9a8a3ab6bf3e8cf944fedbac4b9d208f8d1d0c7a3086f100ac"
	I0110 02:51:21.002764  237156 cri.go:96] found id: "cea7cb71f60a3bf1cc0f92af289dd1a12432224c4eec12b146d182a413e0d79a"
	I0110 02:51:21.002771  237156 cri.go:96] found id: "c189fb7bad01fb39bc337c1bc6565e786ee9e056431d9067b94b07b97b2427cc"
	I0110 02:51:21.002803  237156 cri.go:96] found id: ""
	I0110 02:51:21.002877  237156 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:51:21.442574  237156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:51:21.455607  237156 pause.go:52] kubelet running: false
	I0110 02:51:21.455693  237156 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 02:51:21.636912  237156 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 02:51:21.637074  237156 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 02:51:21.710368  237156 cri.go:96] found id: "910d0dab6a77a8bbc264e0daefb03ba43c5628340ff9b5c13ebb7ce7186fb91b"
	I0110 02:51:21.710392  237156 cri.go:96] found id: "fa1837feaa39e020154dcf1fa0e3cfdcc389657a13b2a7911522e055b8d5c205"
	I0110 02:51:21.710397  237156 cri.go:96] found id: "bc847f22b7096dcdd6f43e2f7fd0bb0bec20221d544edea551adf64088c8d1f9"
	I0110 02:51:21.710401  237156 cri.go:96] found id: "b162bf1f9ee4d4f795e11b6f38b571c7fe566f4233b8a46e579adb8bdc2bdc39"
	I0110 02:51:21.710405  237156 cri.go:96] found id: "f3a4dab3b349942b6b00dff9e9ac2e0626b5c7d8890cd8aa838981f79a2d1c23"
	I0110 02:51:21.710408  237156 cri.go:96] found id: "ccaef7514d5ac77efad7a1f91ebd93145b03e3462b1154eb3f723d4a112acdf6"
	I0110 02:51:21.710439  237156 cri.go:96] found id: "0a4524e2475eb21f6dbb36bb2158a508c31075979547ca9c30367ced5eab40f6"
	I0110 02:51:21.710448  237156 cri.go:96] found id: "73f1ff616118310e5ea35e9ab5496ff345a48d4dbb2a0c33eba742bc17a20097"
	I0110 02:51:21.710452  237156 cri.go:96] found id: "3eef2c483d9e9a9a8a3ab6bf3e8cf944fedbac4b9d208f8d1d0c7a3086f100ac"
	I0110 02:51:21.710458  237156 cri.go:96] found id: "cea7cb71f60a3bf1cc0f92af289dd1a12432224c4eec12b146d182a413e0d79a"
	I0110 02:51:21.710461  237156 cri.go:96] found id: "c189fb7bad01fb39bc337c1bc6565e786ee9e056431d9067b94b07b97b2427cc"
	I0110 02:51:21.710464  237156 cri.go:96] found id: ""
	I0110 02:51:21.710538  237156 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 02:51:21.725171  237156 out.go:203] 
	W0110 02:51:21.728112  237156 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:51:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:51:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 02:51:21.728132  237156 out.go:285] * 
	* 
	W0110 02:51:21.730935  237156 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 02:51:21.733827  237156 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-403885 --alsologtostderr -v=1 failed: exit status 80
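Note (editor's aside, not harness output): per the stderr above, the GUEST_PAUSE failure wraps a single failing step — `sudo runc list -f json` on the node exits 1 with "open /run/runc: no such file or directory" — while `crictl ps` still lists the kube-system containers, so CRI-O itself is up and only the runc state directory is missing. A minimal manual check of the same condition, assuming the profile name used by this test (these commands are illustrative, not taken from the harness):

	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-403885 -- sudo ls -d /run/runc        # absent on this node, per the error above
	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-403885 -- sudo runc list -f json      # the command the pause path retried before giving up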
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-403885
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-403885:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "68becb0d3e52aa8382b4740bd6ddcc35bdfee6a5501059560ca4b5b88dc1ba82",
	        "Created": "2026-01-10T02:49:07.169528975Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 232060,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:50:17.249244298Z",
	            "FinishedAt": "2026-01-10T02:50:15.588512013Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/68becb0d3e52aa8382b4740bd6ddcc35bdfee6a5501059560ca4b5b88dc1ba82/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68becb0d3e52aa8382b4740bd6ddcc35bdfee6a5501059560ca4b5b88dc1ba82/hostname",
	        "HostsPath": "/var/lib/docker/containers/68becb0d3e52aa8382b4740bd6ddcc35bdfee6a5501059560ca4b5b88dc1ba82/hosts",
	        "LogPath": "/var/lib/docker/containers/68becb0d3e52aa8382b4740bd6ddcc35bdfee6a5501059560ca4b5b88dc1ba82/68becb0d3e52aa8382b4740bd6ddcc35bdfee6a5501059560ca4b5b88dc1ba82-json.log",
	        "Name": "/default-k8s-diff-port-403885",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-403885:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-403885",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "68becb0d3e52aa8382b4740bd6ddcc35bdfee6a5501059560ca4b5b88dc1ba82",
	                "LowerDir": "/var/lib/docker/overlay2/d01d645ede148d5aa173ad4f705d7224af0ded80634c399615f32176e47ffff2-init/diff:/var/lib/docker/overlay2/2b235871be32ebd3e55c0a490bf5b289824c6be94b4d2763f5bca3b814af3fd1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d01d645ede148d5aa173ad4f705d7224af0ded80634c399615f32176e47ffff2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d01d645ede148d5aa173ad4f705d7224af0ded80634c399615f32176e47ffff2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d01d645ede148d5aa173ad4f705d7224af0ded80634c399615f32176e47ffff2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-403885",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-403885/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-403885",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-403885",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-403885",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dc182adaacf2631bf689d67ee793e1431bb8c1879d8cf590f0bce4e4ffd1712e",
	            "SandboxKey": "/var/run/docker/netns/dc182adaacf2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-403885": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:2f:ea:4b:05:77",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "634ff5f67b9d8e0ced560118940fddc59ecaca247334cc034944724496472f4d",
	                    "EndpointID": "72c1c2908f39f742654576e114fe2b624310b39150f18e6a071299b3a9fd1ee4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-403885",
	                        "68becb0d3e52"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-403885 -n default-k8s-diff-port-403885
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-403885 -n default-k8s-diff-port-403885: exit status 2 (339.034782ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
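Note (editor's aside, not harness output): exit status 2 with Host=Running is consistent with the half-applied pause above — the pause path had already run `sudo systemctl disable --now kubelet` and logged "kubelet running: false" before aborting on the runc error, so the container keeps running while kubelet stays stopped. A quick manual check under the same assumption about the profile name (illustrative command):

	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-403885 -- sudo systemctl is-active kubelet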
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-403885 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-403885 logs -n 25: (1.35297401s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ delete  │ -p no-preload-676905                                                                                                                                                                                                                          │ no-preload-676905                 │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ delete  │ -p no-preload-676905                                                                                                                                                                                                                          │ no-preload-676905                 │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ start   │ -p newest-cni-733680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-733680                 │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ addons  │ enable metrics-server -p newest-cni-733680 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-733680                 │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │                     │
	│ stop    │ -p newest-cni-733680 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-733680                 │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ addons  │ enable dashboard -p newest-cni-733680 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-733680                 │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ start   │ -p newest-cni-733680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-733680                 │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ image   │ newest-cni-733680 image list --format=json                                                                                                                                                                                                    │ newest-cni-733680                 │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ pause   │ -p newest-cni-733680 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-733680                 │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-403885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-403885      │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-403885 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-403885      │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │ 10 Jan 26 02:50 UTC │
	│ delete  │ -p newest-cni-733680                                                                                                                                                                                                                          │ newest-cni-733680                 │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │ 10 Jan 26 02:50 UTC │
	│ delete  │ -p newest-cni-733680                                                                                                                                                                                                                          │ newest-cni-733680                 │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │ 10 Jan 26 02:50 UTC │
	│ start   │ -p test-preload-dl-gcs-876828 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-876828        │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-876828                                                                                                                                                                                                                 │ test-preload-dl-gcs-876828        │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │ 10 Jan 26 02:50 UTC │
	│ start   │ -p test-preload-dl-github-831481 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-831481     │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-403885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-403885      │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │ 10 Jan 26 02:50 UTC │
	│ start   │ -p default-k8s-diff-port-403885 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-403885      │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │ 10 Jan 26 02:51 UTC │
	│ delete  │ -p test-preload-dl-github-831481                                                                                                                                                                                                              │ test-preload-dl-github-831481     │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │ 10 Jan 26 02:50 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-145587 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-145587 │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-145587                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-145587 │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │ 10 Jan 26 02:50 UTC │
	│ start   │ -p auto-989144 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-989144                       │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │ 10 Jan 26 02:51 UTC │
	│ ssh     │ -p auto-989144 pgrep -a kubelet                                                                                                                                                                                                               │ auto-989144                       │ jenkins │ v1.37.0 │ 10 Jan 26 02:51 UTC │ 10 Jan 26 02:51 UTC │
	│ image   │ default-k8s-diff-port-403885 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-403885      │ jenkins │ v1.37.0 │ 10 Jan 26 02:51 UTC │ 10 Jan 26 02:51 UTC │
	│ pause   │ -p default-k8s-diff-port-403885 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-403885      │ jenkins │ v1.37.0 │ 10 Jan 26 02:51 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:50:23
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:50:23.372009  233009 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:50:23.372185  233009 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:50:23.372210  233009 out.go:374] Setting ErrFile to fd 2...
	I0110 02:50:23.372231  233009 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:50:23.373715  233009 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:50:23.374195  233009 out.go:368] Setting JSON to false
	I0110 02:50:23.374973  233009 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5573,"bootTime":1768007851,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 02:50:23.375039  233009 start.go:143] virtualization:  
	I0110 02:50:23.378020  233009 out.go:179] * [auto-989144] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:50:23.381958  233009 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:50:23.382035  233009 notify.go:221] Checking for updates...
	I0110 02:50:23.387763  233009 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:50:23.390637  233009 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:50:23.393437  233009 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 02:50:23.396125  233009 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:50:23.398973  233009 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:50:23.402357  233009 config.go:182] Loaded profile config "default-k8s-diff-port-403885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:50:23.402464  233009 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:50:23.436154  233009 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:50:23.436273  233009 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:50:23.540633  233009 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-10 02:50:23.531381029 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:50:23.540732  233009 docker.go:319] overlay module found
	I0110 02:50:23.543871  233009 out.go:179] * Using the docker driver based on user configuration
	I0110 02:50:23.546696  233009 start.go:309] selected driver: docker
	I0110 02:50:23.546710  233009 start.go:928] validating driver "docker" against <nil>
	I0110 02:50:23.546723  233009 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:50:23.547420  233009 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:50:23.650203  233009 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-10 02:50:23.639466846 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:50:23.650372  233009 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 02:50:23.650584  233009 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:50:23.653456  233009 out.go:179] * Using Docker driver with root privileges
	I0110 02:50:23.656327  233009 cni.go:84] Creating CNI manager for ""
	I0110 02:50:23.656384  233009 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:50:23.656395  233009 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 02:50:23.656465  233009 start.go:353] cluster config:
	{Name:auto-989144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-989144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:50:23.659502  233009 out.go:179] * Starting "auto-989144" primary control-plane node in "auto-989144" cluster
	I0110 02:50:23.662349  233009 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:50:23.665377  233009 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:50:23.668185  233009 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:50:23.668232  233009 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 02:50:23.668241  233009 cache.go:65] Caching tarball of preloaded images
	I0110 02:50:23.668323  233009 preload.go:251] Found /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 02:50:23.668340  233009 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:50:23.668439  233009 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/config.json ...
	I0110 02:50:23.668457  233009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/config.json: {Name:mk0723e954b06aa6ef935da4b050e021f40f239a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:23.668598  233009 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:50:23.688834  233009 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:50:23.688851  233009 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:50:23.688865  233009 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:50:23.688896  233009 start.go:360] acquireMachinesLock for auto-989144: {Name:mk0c135b81631369a38f59e1c5f17ccfdae85af7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:50:23.688991  233009 start.go:364] duration metric: took 79.382µs to acquireMachinesLock for "auto-989144"
	I0110 02:50:23.689018  233009 start.go:93] Provisioning new machine with config: &{Name:auto-989144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-989144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:50:23.689076  233009 start.go:125] createHost starting for "" (driver="docker")
	I0110 02:50:22.302621  231933 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:50:22.302641  231933 machine.go:97] duration metric: took 4.672926946s to provisionDockerMachine
	I0110 02:50:22.302652  231933 start.go:293] postStartSetup for "default-k8s-diff-port-403885" (driver="docker")
	I0110 02:50:22.302663  231933 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:50:22.302722  231933 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:50:22.302818  231933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:50:22.342465  231933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/default-k8s-diff-port-403885/id_rsa Username:docker}
	I0110 02:50:22.458157  231933 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:50:22.462035  231933 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:50:22.462064  231933 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:50:22.462075  231933 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/addons for local assets ...
	I0110 02:50:22.462128  231933 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/files for local assets ...
	I0110 02:50:22.462211  231933 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> 41682.pem in /etc/ssl/certs
	I0110 02:50:22.462333  231933 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:50:22.470591  231933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:50:22.489701  231933 start.go:296] duration metric: took 187.03564ms for postStartSetup
	I0110 02:50:22.489777  231933 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:50:22.489818  231933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:50:22.516545  231933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/default-k8s-diff-port-403885/id_rsa Username:docker}
	I0110 02:50:22.627373  231933 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:50:22.633725  231933 fix.go:56] duration metric: took 5.447620162s for fixHost
	I0110 02:50:22.633758  231933 start.go:83] releasing machines lock for "default-k8s-diff-port-403885", held for 5.447667291s
	I0110 02:50:22.633823  231933 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-403885
	I0110 02:50:22.656691  231933 ssh_runner.go:195] Run: cat /version.json
	I0110 02:50:22.656742  231933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:50:22.656987  231933 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:50:22.657038  231933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:50:22.696944  231933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/default-k8s-diff-port-403885/id_rsa Username:docker}
	I0110 02:50:22.707568  231933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/default-k8s-diff-port-403885/id_rsa Username:docker}
	I0110 02:50:22.811430  231933 ssh_runner.go:195] Run: systemctl --version
	I0110 02:50:22.935076  231933 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:50:22.997268  231933 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:50:23.004470  231933 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:50:23.004550  231933 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:50:23.014132  231933 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 02:50:23.014153  231933 start.go:496] detecting cgroup driver to use...
	I0110 02:50:23.014184  231933 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 02:50:23.014247  231933 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:50:23.032426  231933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:50:23.051852  231933 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:50:23.051924  231933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:50:23.068669  231933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:50:23.082838  231933 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:50:23.219258  231933 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:50:23.369701  231933 docker.go:234] disabling docker service ...
	I0110 02:50:23.369786  231933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:50:23.388922  231933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:50:23.409066  231933 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:50:23.561096  231933 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:50:23.700318  231933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:50:23.714462  231933 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:50:23.736792  231933 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:50:23.736869  231933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:23.748771  231933 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 02:50:23.748840  231933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:23.759377  231933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:23.773558  231933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:23.793451  231933 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:50:23.804610  231933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:23.817418  231933 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:23.828824  231933 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:23.842570  231933 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:50:23.856006  231933 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:50:23.864115  231933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:50:23.999249  231933 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 02:50:24.357558  231933 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:50:24.357626  231933 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:50:24.362500  231933 start.go:574] Will wait 60s for crictl version
	I0110 02:50:24.362561  231933 ssh_runner.go:195] Run: which crictl
	I0110 02:50:24.366788  231933 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:50:24.409722  231933 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:50:24.409806  231933 ssh_runner.go:195] Run: crio --version
	I0110 02:50:24.450944  231933 ssh_runner.go:195] Run: crio --version
	I0110 02:50:24.504630  231933 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:50:24.507576  231933 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-403885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:50:24.534708  231933 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 02:50:24.538514  231933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:50:24.548455  231933 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-403885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-403885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:50:24.548578  231933 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:50:24.548636  231933 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:50:24.602456  231933 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:50:24.602477  231933 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:50:24.602532  231933 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:50:24.653825  231933 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:50:24.653857  231933 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:50:24.653866  231933 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.35.0 crio true true} ...
	I0110 02:50:24.653981  231933 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-403885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-403885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:50:24.654091  231933 ssh_runner.go:195] Run: crio config
	I0110 02:50:24.743472  231933 cni.go:84] Creating CNI manager for ""
	I0110 02:50:24.743496  231933 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:50:24.743524  231933 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:50:24.743548  231933 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-403885 NodeName:default-k8s-diff-port-403885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:50:24.743677  231933 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-403885"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:50:24.743757  231933 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:50:24.755908  231933 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:50:24.755980  231933 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:50:24.772062  231933 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0110 02:50:24.804113  231933 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:50:24.818583  231933 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I0110 02:50:24.833525  231933 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:50:24.837865  231933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:50:24.849935  231933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:50:25.021584  231933 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:50:25.040414  231933 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885 for IP: 192.168.85.2
	I0110 02:50:25.040438  231933 certs.go:195] generating shared ca certs ...
	I0110 02:50:25.040456  231933 certs.go:227] acquiring lock for ca certs: {Name:mk60bb8578732e3e8efcf11c7d77fb48585828c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:25.040603  231933 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key
	I0110 02:50:25.040661  231933 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key
	I0110 02:50:25.040673  231933 certs.go:257] generating profile certs ...
	I0110 02:50:25.040763  231933 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/client.key
	I0110 02:50:25.040836  231933 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/apiserver.key.f53c6d08
	I0110 02:50:25.040883  231933 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/proxy-client.key
	I0110 02:50:25.041006  231933 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem (1338 bytes)
	W0110 02:50:25.041043  231933 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168_empty.pem, impossibly tiny 0 bytes
	I0110 02:50:25.041057  231933 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:50:25.041089  231933 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:50:25.041117  231933 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:50:25.041150  231933 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem (1679 bytes)
	I0110 02:50:25.041200  231933 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:50:25.041778  231933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:50:25.110027  231933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:50:25.153950  231933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:50:25.209707  231933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 02:50:25.279447  231933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0110 02:50:25.341608  231933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:50:25.375197  231933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:50:25.396332  231933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 02:50:25.415172  231933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I0110 02:50:25.442401  231933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I0110 02:50:25.475556  231933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:50:25.501102  231933 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:50:25.518992  231933 ssh_runner.go:195] Run: openssl version
	I0110 02:50:25.528842  231933 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I0110 02:50:25.537896  231933 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I0110 02:50:25.546652  231933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I0110 02:50:25.551051  231933 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:58 /usr/share/ca-certificates/41682.pem
	I0110 02:50:25.551171  231933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I0110 02:50:25.596004  231933 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:50:25.605363  231933 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:50:25.614097  231933 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:50:25.623265  231933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:50:25.631222  231933 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:50:25.631355  231933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:50:25.678253  231933 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:50:25.687422  231933 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I0110 02:50:25.696236  231933 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I0110 02:50:25.705170  231933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I0110 02:50:25.709814  231933 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:58 /usr/share/ca-certificates/4168.pem
	I0110 02:50:25.709933  231933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I0110 02:50:25.756522  231933 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:50:25.766725  231933 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:50:25.787086  231933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 02:50:25.888303  231933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 02:50:25.988096  231933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 02:50:26.082286  231933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 02:50:26.198308  231933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 02:50:26.377357  231933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0110 02:50:26.492703  231933 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-403885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-403885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:50:26.492855  231933 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:50:26.492961  231933 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:50:26.586424  231933 cri.go:96] found id: "ccaef7514d5ac77efad7a1f91ebd93145b03e3462b1154eb3f723d4a112acdf6"
	I0110 02:50:26.586499  231933 cri.go:96] found id: "0a4524e2475eb21f6dbb36bb2158a508c31075979547ca9c30367ced5eab40f6"
	I0110 02:50:26.586533  231933 cri.go:96] found id: "73f1ff616118310e5ea35e9ab5496ff345a48d4dbb2a0c33eba742bc17a20097"
	I0110 02:50:26.586565  231933 cri.go:96] found id: "3eef2c483d9e9a9a8a3ab6bf3e8cf944fedbac4b9d208f8d1d0c7a3086f100ac"
	I0110 02:50:26.586584  231933 cri.go:96] found id: ""
	I0110 02:50:26.586668  231933 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 02:50:26.639418  231933 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:50:26Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:50:26.639575  231933 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:50:26.668217  231933 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 02:50:26.668297  231933 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 02:50:26.668394  231933 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 02:50:26.686454  231933 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 02:50:26.686959  231933 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-403885" does not appear in /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:50:26.687113  231933 kubeconfig.go:62] /home/jenkins/minikube-integration/22414-2353/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-403885" cluster setting kubeconfig missing "default-k8s-diff-port-403885" context setting]
	I0110 02:50:26.687476  231933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:26.689352  231933 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 02:50:26.705803  231933 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I0110 02:50:26.705885  231933 kubeadm.go:602] duration metric: took 37.568468ms to restartPrimaryControlPlane
	I0110 02:50:26.705918  231933 kubeadm.go:403] duration metric: took 213.223882ms to StartCluster
	I0110 02:50:26.705969  231933 settings.go:142] acquiring lock: {Name:mkf19ab3097ad600bdb33c5b47b20062ddaabac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:26.706070  231933 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:50:26.706763  231933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:26.707041  231933 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:50:26.707519  231933 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:50:26.707595  231933 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-403885"
	I0110 02:50:26.707608  231933 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-403885"
	W0110 02:50:26.707614  231933 addons.go:248] addon storage-provisioner should already be in state true
	I0110 02:50:26.707635  231933 host.go:66] Checking if "default-k8s-diff-port-403885" exists ...
	I0110 02:50:26.709713  231933 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-403885 --format={{.State.Status}}
	I0110 02:50:26.709978  231933 config.go:182] Loaded profile config "default-k8s-diff-port-403885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:50:26.710073  231933 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-403885"
	I0110 02:50:26.710104  231933 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-403885"
	W0110 02:50:26.710138  231933 addons.go:248] addon dashboard should already be in state true
	I0110 02:50:26.710188  231933 host.go:66] Checking if "default-k8s-diff-port-403885" exists ...
	I0110 02:50:26.710711  231933 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-403885 --format={{.State.Status}}
	I0110 02:50:26.711104  231933 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-403885"
	I0110 02:50:26.711132  231933 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-403885"
	I0110 02:50:26.711396  231933 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-403885 --format={{.State.Status}}
	I0110 02:50:26.715584  231933 out.go:179] * Verifying Kubernetes components...
	I0110 02:50:26.723363  231933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:50:26.783964  231933 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:50:26.787608  231933 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:50:26.787632  231933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:50:26.787709  231933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:50:26.790296  231933 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-403885"
	W0110 02:50:26.790320  231933 addons.go:248] addon default-storageclass should already be in state true
	I0110 02:50:26.790345  231933 host.go:66] Checking if "default-k8s-diff-port-403885" exists ...
	I0110 02:50:26.790940  231933 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-403885 --format={{.State.Status}}
	I0110 02:50:26.791902  231933 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 02:50:26.799937  231933 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 02:50:26.819906  231933 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 02:50:26.819934  231933 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 02:50:26.820017  231933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:50:26.825662  231933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/default-k8s-diff-port-403885/id_rsa Username:docker}
	I0110 02:50:26.840449  231933 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:50:26.840471  231933 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:50:26.840540  231933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:50:26.871822  231933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/default-k8s-diff-port-403885/id_rsa Username:docker}
	I0110 02:50:26.892930  231933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/default-k8s-diff-port-403885/id_rsa Username:docker}
	I0110 02:50:23.692443  233009 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:50:23.692665  233009 start.go:159] libmachine.API.Create for "auto-989144" (driver="docker")
	I0110 02:50:23.692691  233009 client.go:173] LocalClient.Create starting
	I0110 02:50:23.692759  233009 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem
	I0110 02:50:23.692792  233009 main.go:144] libmachine: Decoding PEM data...
	I0110 02:50:23.692806  233009 main.go:144] libmachine: Parsing certificate...
	I0110 02:50:23.692858  233009 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem
	I0110 02:50:23.692874  233009 main.go:144] libmachine: Decoding PEM data...
	I0110 02:50:23.692885  233009 main.go:144] libmachine: Parsing certificate...
	I0110 02:50:23.693267  233009 cli_runner.go:164] Run: docker network inspect auto-989144 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:50:23.712449  233009 cli_runner.go:211] docker network inspect auto-989144 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:50:23.712518  233009 network_create.go:284] running [docker network inspect auto-989144] to gather additional debugging logs...
	I0110 02:50:23.712546  233009 cli_runner.go:164] Run: docker network inspect auto-989144
	W0110 02:50:23.730771  233009 cli_runner.go:211] docker network inspect auto-989144 returned with exit code 1
	I0110 02:50:23.730819  233009 network_create.go:287] error running [docker network inspect auto-989144]: docker network inspect auto-989144: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-989144 not found
	I0110 02:50:23.730832  233009 network_create.go:289] output of [docker network inspect auto-989144]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-989144 not found
	
	** /stderr **
	I0110 02:50:23.730923  233009 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:50:23.747591  233009 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dba69832168e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:28:cf:b8:5c:6c} reservation:<nil>}
	I0110 02:50:23.747982  233009 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1bcca9209747 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d2:8b:83:a4:43:ae} reservation:<nil>}
	I0110 02:50:23.748295  233009 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-383ddfacc8f8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:26:ff:fb:97:66:af} reservation:<nil>}
	I0110 02:50:23.749561  233009 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2abe0}
	I0110 02:50:23.749587  233009 network_create.go:124] attempt to create docker network auto-989144 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0110 02:50:23.749643  233009 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-989144 auto-989144
	I0110 02:50:23.838431  233009 network_create.go:108] docker network auto-989144 192.168.76.0/24 created
	I0110 02:50:23.838460  233009 kic.go:121] calculated static IP "192.168.76.2" for the "auto-989144" container
	I0110 02:50:23.838535  233009 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:50:23.855728  233009 cli_runner.go:164] Run: docker volume create auto-989144 --label name.minikube.sigs.k8s.io=auto-989144 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:50:23.881245  233009 oci.go:103] Successfully created a docker volume auto-989144
	I0110 02:50:23.881332  233009 cli_runner.go:164] Run: docker run --rm --name auto-989144-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-989144 --entrypoint /usr/bin/test -v auto-989144:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 02:50:24.714953  233009 oci.go:107] Successfully prepared a docker volume auto-989144
	I0110 02:50:24.715023  233009 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:50:24.715042  233009 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 02:50:24.715118  233009 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-989144:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 02:50:27.194825  231933 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 02:50:27.194852  231933 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 02:50:27.256561  231933 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:50:27.277405  231933 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 02:50:27.277429  231933 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 02:50:27.292312  231933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:50:27.331152  231933 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-403885" to be "Ready" ...
	I0110 02:50:27.337064  231933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:50:27.349338  231933 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 02:50:27.349364  231933 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 02:50:27.451462  231933 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 02:50:27.451485  231933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 02:50:27.565540  231933 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 02:50:27.565620  231933 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 02:50:27.670699  231933 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 02:50:27.670778  231933 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 02:50:27.760468  231933 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 02:50:27.760542  231933 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 02:50:27.841848  231933 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 02:50:27.841928  231933 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 02:50:27.909237  231933 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:50:27.909262  231933 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 02:50:27.941802  231933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:50:31.225373  231933 node_ready.go:49] node "default-k8s-diff-port-403885" is "Ready"
	I0110 02:50:31.225405  231933 node_ready.go:38] duration metric: took 3.89421938s for node "default-k8s-diff-port-403885" to be "Ready" ...
	I0110 02:50:31.225422  231933 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:50:31.225486  231933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:50:32.671326  231933 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.378977361s)
	I0110 02:50:32.671381  231933 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.33429348s)
	I0110 02:50:32.671628  231933 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.729781586s)
	I0110 02:50:32.671882  231933 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.446379139s)
	I0110 02:50:32.671935  231933 api_server.go:72] duration metric: took 5.964813079s to wait for apiserver process to appear ...
	I0110 02:50:32.671955  231933 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:50:32.672001  231933 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0110 02:50:32.675491  231933 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-403885 addons enable metrics-server
	
	I0110 02:50:32.705472  231933 api_server.go:325] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 02:50:32.705497  231933 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 02:50:32.736801  231933 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0110 02:50:29.658710  233009 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-989144:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (4.943552073s)
	I0110 02:50:29.658740  233009 kic.go:203] duration metric: took 4.943695348s to extract preloaded images to volume ...
	W0110 02:50:29.658888  233009 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 02:50:29.659019  233009 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 02:50:29.746517  233009 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-989144 --name auto-989144 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-989144 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-989144 --network auto-989144 --ip 192.168.76.2 --volume auto-989144:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 02:50:30.114885  233009 cli_runner.go:164] Run: docker container inspect auto-989144 --format={{.State.Running}}
	I0110 02:50:30.139536  233009 cli_runner.go:164] Run: docker container inspect auto-989144 --format={{.State.Status}}
	I0110 02:50:30.162968  233009 cli_runner.go:164] Run: docker exec auto-989144 stat /var/lib/dpkg/alternatives/iptables
	I0110 02:50:30.235947  233009 oci.go:144] the created container "auto-989144" has a running status.
	I0110 02:50:30.235974  233009 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/auto-989144/id_rsa...
	I0110 02:50:30.338462  233009 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22414-2353/.minikube/machines/auto-989144/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 02:50:30.377563  233009 cli_runner.go:164] Run: docker container inspect auto-989144 --format={{.State.Status}}
	I0110 02:50:30.399396  233009 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 02:50:30.399415  233009 kic_runner.go:114] Args: [docker exec --privileged auto-989144 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 02:50:30.452116  233009 cli_runner.go:164] Run: docker container inspect auto-989144 --format={{.State.Status}}
	I0110 02:50:30.478883  233009 machine.go:94] provisionDockerMachine start ...
	I0110 02:50:30.478964  233009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989144
	I0110 02:50:30.515595  233009 main.go:144] libmachine: Using SSH client type: native
	I0110 02:50:30.516004  233009 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0110 02:50:30.516015  233009 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:50:30.516583  233009 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46636->127.0.0.1:33098: read: connection reset by peer
	I0110 02:50:32.739710  231933 addons.go:530] duration metric: took 6.032188449s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0110 02:50:33.172980  231933 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0110 02:50:33.181708  231933 api_server.go:325] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0110 02:50:33.185361  231933 api_server.go:141] control plane version: v1.35.0
	I0110 02:50:33.185387  231933 api_server.go:131] duration metric: took 513.412882ms to wait for apiserver health ...
	I0110 02:50:33.185397  231933 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:50:33.192607  231933 system_pods.go:59] 8 kube-system pods found
	I0110 02:50:33.192645  231933 system_pods.go:61] "coredns-7d764666f9-sck2c" [8791efbb-04ac-4811-983a-9ffaf7bb15be] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:50:33.192655  231933 system_pods.go:61] "etcd-default-k8s-diff-port-403885" [bdc3bfff-d5db-47c9-8f0f-81256caffbcb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:50:33.192664  231933 system_pods.go:61] "kindnet-4h8vm" [f5820215-db87-4ed2-99a4-3970efeca785] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 02:50:33.192671  231933 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-403885" [708252d4-cdfc-4c44-bf86-993f95e9cbb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:50:33.192681  231933 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-403885" [a6e8ce72-0215-4cdc-b8a5-30863aba7c54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:50:33.192688  231933 system_pods.go:61] "kube-proxy-ss9fs" [61ec02c4-f966-46de-bd2f-81de41f8f1bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 02:50:33.192695  231933 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-403885" [fde84223-af60-46ed-b21b-ccd83712651a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:50:33.192704  231933 system_pods.go:61] "storage-provisioner" [83270b0f-e37c-4f07-a380-3ffb6386d492] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:50:33.192716  231933 system_pods.go:74] duration metric: took 7.314178ms to wait for pod list to return data ...
	I0110 02:50:33.192730  231933 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:50:33.198163  231933 default_sa.go:45] found service account: "default"
	I0110 02:50:33.198191  231933 default_sa.go:55] duration metric: took 5.455011ms for default service account to be created ...
	I0110 02:50:33.198202  231933 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:50:33.202890  231933 system_pods.go:86] 8 kube-system pods found
	I0110 02:50:33.202925  231933 system_pods.go:89] "coredns-7d764666f9-sck2c" [8791efbb-04ac-4811-983a-9ffaf7bb15be] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:50:33.202936  231933 system_pods.go:89] "etcd-default-k8s-diff-port-403885" [bdc3bfff-d5db-47c9-8f0f-81256caffbcb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:50:33.202945  231933 system_pods.go:89] "kindnet-4h8vm" [f5820215-db87-4ed2-99a4-3970efeca785] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 02:50:33.202955  231933 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-403885" [708252d4-cdfc-4c44-bf86-993f95e9cbb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:50:33.202963  231933 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-403885" [a6e8ce72-0215-4cdc-b8a5-30863aba7c54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:50:33.202974  231933 system_pods.go:89] "kube-proxy-ss9fs" [61ec02c4-f966-46de-bd2f-81de41f8f1bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 02:50:33.202983  231933 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-403885" [fde84223-af60-46ed-b21b-ccd83712651a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:50:33.202992  231933 system_pods.go:89] "storage-provisioner" [83270b0f-e37c-4f07-a380-3ffb6386d492] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:50:33.202999  231933 system_pods.go:126] duration metric: took 4.792004ms to wait for k8s-apps to be running ...
	I0110 02:50:33.203012  231933 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:50:33.203070  231933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:50:33.221022  231933 system_svc.go:56] duration metric: took 18.000698ms WaitForService to wait for kubelet
	I0110 02:50:33.221053  231933 kubeadm.go:587] duration metric: took 6.513939316s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:50:33.221073  231933 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:50:33.226202  231933 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 02:50:33.226229  231933 node_conditions.go:123] node cpu capacity is 2
	I0110 02:50:33.226251  231933 node_conditions.go:105] duration metric: took 5.172852ms to run NodePressure ...
	I0110 02:50:33.226265  231933 start.go:242] waiting for startup goroutines ...
	I0110 02:50:33.226276  231933 start.go:247] waiting for cluster config update ...
	I0110 02:50:33.226294  231933 start.go:256] writing updated cluster config ...
	I0110 02:50:33.226559  231933 ssh_runner.go:195] Run: rm -f paused
	I0110 02:50:33.230214  231933 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:50:33.235040  231933 pod_ready.go:83] waiting for pod "coredns-7d764666f9-sck2c" in "kube-system" namespace to be "Ready" or be gone ...
	W0110 02:50:35.263767  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	I0110 02:50:33.669237  233009 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-989144
	
	I0110 02:50:33.669262  233009 ubuntu.go:182] provisioning hostname "auto-989144"
	I0110 02:50:33.669357  233009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989144
	I0110 02:50:33.687518  233009 main.go:144] libmachine: Using SSH client type: native
	I0110 02:50:33.688030  233009 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0110 02:50:33.688046  233009 main.go:144] libmachine: About to run SSH command:
	sudo hostname auto-989144 && echo "auto-989144" | sudo tee /etc/hostname
	I0110 02:50:33.853127  233009 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-989144
	
	I0110 02:50:33.853225  233009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989144
	I0110 02:50:33.872900  233009 main.go:144] libmachine: Using SSH client type: native
	I0110 02:50:33.873221  233009 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0110 02:50:33.873243  233009 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-989144' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-989144/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-989144' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:50:34.019976  233009 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:50:34.019999  233009 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2353/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2353/.minikube}
	I0110 02:50:34.020032  233009 ubuntu.go:190] setting up certificates
	I0110 02:50:34.020042  233009 provision.go:84] configureAuth start
	I0110 02:50:34.020098  233009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-989144
	I0110 02:50:34.039297  233009 provision.go:143] copyHostCerts
	I0110 02:50:34.039362  233009 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem, removing ...
	I0110 02:50:34.039371  233009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem
	I0110 02:50:34.039465  233009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem (1123 bytes)
	I0110 02:50:34.039562  233009 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem, removing ...
	I0110 02:50:34.039568  233009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem
	I0110 02:50:34.039593  233009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem (1679 bytes)
	I0110 02:50:34.039643  233009 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem, removing ...
	I0110 02:50:34.039647  233009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem
	I0110 02:50:34.039670  233009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem (1082 bytes)
	I0110 02:50:34.039717  233009 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem org=jenkins.auto-989144 san=[127.0.0.1 192.168.76.2 auto-989144 localhost minikube]
	I0110 02:50:34.373512  233009 provision.go:177] copyRemoteCerts
	I0110 02:50:34.373631  233009 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:50:34.373703  233009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989144
	I0110 02:50:34.391258  233009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/auto-989144/id_rsa Username:docker}
	I0110 02:50:34.500147  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:50:34.528285  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0110 02:50:34.548743  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:50:34.569326  233009 provision.go:87] duration metric: took 549.271375ms to configureAuth
	I0110 02:50:34.569358  233009 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:50:34.569601  233009 config.go:182] Loaded profile config "auto-989144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:50:34.569750  233009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989144
	I0110 02:50:34.587522  233009 main.go:144] libmachine: Using SSH client type: native
	I0110 02:50:34.587861  233009 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0110 02:50:34.587882  233009 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:50:34.932347  233009 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:50:34.932376  233009 machine.go:97] duration metric: took 4.453474458s to provisionDockerMachine
	I0110 02:50:34.932388  233009 client.go:176] duration metric: took 11.239690885s to LocalClient.Create
	I0110 02:50:34.932403  233009 start.go:167] duration metric: took 11.239738342s to libmachine.API.Create "auto-989144"
	I0110 02:50:34.932410  233009 start.go:293] postStartSetup for "auto-989144" (driver="docker")
	I0110 02:50:34.932420  233009 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:50:34.932509  233009 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:50:34.932598  233009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989144
	I0110 02:50:34.966618  233009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/auto-989144/id_rsa Username:docker}
	I0110 02:50:35.086372  233009 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:50:35.090898  233009 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:50:35.090939  233009 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:50:35.090950  233009 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/addons for local assets ...
	I0110 02:50:35.091006  233009 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/files for local assets ...
	I0110 02:50:35.091091  233009 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> 41682.pem in /etc/ssl/certs
	I0110 02:50:35.091193  233009 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:50:35.102664  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:50:35.123919  233009 start.go:296] duration metric: took 191.484423ms for postStartSetup
	I0110 02:50:35.124335  233009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-989144
	I0110 02:50:35.148100  233009 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/config.json ...
	I0110 02:50:35.148377  233009 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:50:35.148427  233009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989144
	I0110 02:50:35.176519  233009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/auto-989144/id_rsa Username:docker}
	I0110 02:50:35.285543  233009 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:50:35.294082  233009 start.go:128] duration metric: took 11.604993259s to createHost
	I0110 02:50:35.294122  233009 start.go:83] releasing machines lock for "auto-989144", held for 11.605112369s
	I0110 02:50:35.294204  233009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-989144
	I0110 02:50:35.313631  233009 ssh_runner.go:195] Run: cat /version.json
	I0110 02:50:35.313798  233009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989144
	I0110 02:50:35.313739  233009 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:50:35.314095  233009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989144
	I0110 02:50:35.338547  233009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/auto-989144/id_rsa Username:docker}
	I0110 02:50:35.361257  233009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/auto-989144/id_rsa Username:docker}
	I0110 02:50:35.572228  233009 ssh_runner.go:195] Run: systemctl --version
	I0110 02:50:35.578678  233009 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:50:35.619328  233009 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:50:35.623756  233009 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:50:35.623912  233009 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:50:35.653664  233009 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 02:50:35.653691  233009 start.go:496] detecting cgroup driver to use...
	I0110 02:50:35.653750  233009 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 02:50:35.653832  233009 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:50:35.671296  233009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:50:35.684072  233009 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:50:35.684151  233009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:50:35.703096  233009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:50:35.722524  233009 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:50:35.879877  233009 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:50:36.015083  233009 docker.go:234] disabling docker service ...
	I0110 02:50:36.015151  233009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:50:36.041658  233009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:50:36.055624  233009 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:50:36.182491  233009 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:50:36.317948  233009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:50:36.335474  233009 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:50:36.366936  233009 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:50:36.367050  233009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:36.376961  233009 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 02:50:36.377080  233009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:36.389720  233009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:36.410086  233009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:36.428539  233009 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:50:36.439095  233009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:36.449651  233009 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:36.472483  233009 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:36.483483  233009 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:50:36.494953  233009 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:50:36.503129  233009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:50:36.671859  233009 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 02:50:36.930191  233009 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:50:36.930345  233009 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:50:36.935502  233009 start.go:574] Will wait 60s for crictl version
	I0110 02:50:36.935609  233009 ssh_runner.go:195] Run: which crictl
	I0110 02:50:36.940917  233009 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:50:36.977719  233009 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:50:36.977882  233009 ssh_runner.go:195] Run: crio --version
	I0110 02:50:37.020953  233009 ssh_runner.go:195] Run: crio --version
	I0110 02:50:37.063895  233009 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:50:37.066886  233009 cli_runner.go:164] Run: docker network inspect auto-989144 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:50:37.094081  233009 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 02:50:37.103414  233009 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:50:37.117401  233009 kubeadm.go:884] updating cluster {Name:auto-989144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-989144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:50:37.117518  233009 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:50:37.117570  233009 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:50:37.172336  233009 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:50:37.172357  233009 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:50:37.172421  233009 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:50:37.213419  233009 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:50:37.213440  233009 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:50:37.213448  233009 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 02:50:37.213536  233009 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-989144 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:auto-989144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:50:37.213610  233009 ssh_runner.go:195] Run: crio config
	I0110 02:50:37.293430  233009 cni.go:84] Creating CNI manager for ""
	I0110 02:50:37.293499  233009 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:50:37.293533  233009 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:50:37.293586  233009 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-989144 NodeName:auto-989144 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:50:37.293744  233009 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-989144"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:50:37.293853  233009 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:50:37.303978  233009 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:50:37.304093  233009 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:50:37.311585  233009 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0110 02:50:37.325816  233009 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:50:37.339175  233009 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I0110 02:50:37.352531  233009 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:50:37.356597  233009 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:50:37.367097  233009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:50:37.541126  233009 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:50:37.559156  233009 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144 for IP: 192.168.76.2
	I0110 02:50:37.559225  233009 certs.go:195] generating shared ca certs ...
	I0110 02:50:37.559256  233009 certs.go:227] acquiring lock for ca certs: {Name:mk60bb8578732e3e8efcf11c7d77fb48585828c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:37.559443  233009 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key
	I0110 02:50:37.559520  233009 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key
	I0110 02:50:37.559555  233009 certs.go:257] generating profile certs ...
	I0110 02:50:37.559635  233009 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.key
	I0110 02:50:37.559673  233009 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.crt with IP's: []
	I0110 02:50:37.700016  233009 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.crt ...
	I0110 02:50:37.700088  233009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.crt: {Name:mk0a9f6799306a45f75bc2d4088c8485af031457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:37.700535  233009 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.key ...
	I0110 02:50:37.700572  233009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.key: {Name:mkf40f0c89aa73969267378f93f8c575543af9ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:37.700731  233009 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/apiserver.key.f5155a44
	I0110 02:50:37.700770  233009 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/apiserver.crt.f5155a44 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0110 02:50:37.850411  233009 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/apiserver.crt.f5155a44 ...
	I0110 02:50:37.850501  233009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/apiserver.crt.f5155a44: {Name:mkbd845ee331e9f8d1247393de0522d1df7142cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:37.850696  233009 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/apiserver.key.f5155a44 ...
	I0110 02:50:37.850735  233009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/apiserver.key.f5155a44: {Name:mk82f002d186601567c317dd3ff1c5384ef7f9c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:37.850872  233009 certs.go:382] copying /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/apiserver.crt.f5155a44 -> /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/apiserver.crt
	I0110 02:50:37.851006  233009 certs.go:386] copying /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/apiserver.key.f5155a44 -> /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/apiserver.key
	I0110 02:50:37.851095  233009 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/proxy-client.key
	I0110 02:50:37.851145  233009 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/proxy-client.crt with IP's: []
	I0110 02:50:38.217239  233009 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/proxy-client.crt ...
	I0110 02:50:38.217350  233009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/proxy-client.crt: {Name:mk8ea00254b59891f4ff96a5b6d421200881489d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:38.217555  233009 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/proxy-client.key ...
	I0110 02:50:38.217588  233009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/proxy-client.key: {Name:mke62927631ff9bf11265f49d98f7f1566e865a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:38.217820  233009 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem (1338 bytes)
	W0110 02:50:38.217884  233009 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168_empty.pem, impossibly tiny 0 bytes
	I0110 02:50:38.217908  233009 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:50:38.217967  233009 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:50:38.218022  233009 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:50:38.218082  233009 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem (1679 bytes)
	I0110 02:50:38.218153  233009 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:50:38.218762  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:50:38.240774  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:50:38.257427  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:50:38.275230  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 02:50:38.292459  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0110 02:50:38.310971  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 02:50:38.328327  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:50:38.344613  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 02:50:38.361182  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:50:38.379046  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I0110 02:50:38.396303  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I0110 02:50:38.413024  233009 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:50:38.437826  233009 ssh_runner.go:195] Run: openssl version
	I0110 02:50:38.448439  233009 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I0110 02:50:38.464397  233009 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I0110 02:50:38.478014  233009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I0110 02:50:38.482644  233009 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:58 /usr/share/ca-certificates/41682.pem
	I0110 02:50:38.482783  233009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I0110 02:50:38.550460  233009 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:50:38.559244  233009 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41682.pem /etc/ssl/certs/3ec20f2e.0
	I0110 02:50:38.567372  233009 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:50:38.575096  233009 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:50:38.583096  233009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:50:38.587376  233009 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:50:38.587512  233009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:50:38.632036  233009 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:50:38.640470  233009 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 02:50:38.648296  233009 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I0110 02:50:38.655999  233009 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I0110 02:50:38.663977  233009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I0110 02:50:38.668099  233009 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:58 /usr/share/ca-certificates/4168.pem
	I0110 02:50:38.668211  233009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I0110 02:50:38.711597  233009 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:50:38.720059  233009 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4168.pem /etc/ssl/certs/51391683.0
	I0110 02:50:38.728362  233009 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:50:38.732733  233009 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 02:50:38.732832  233009 kubeadm.go:401] StartCluster: {Name:auto-989144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-989144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:50:38.732946  233009 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:50:38.733038  233009 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:50:38.763561  233009 cri.go:96] found id: ""
	I0110 02:50:38.763669  233009 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:50:38.798195  233009 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 02:50:38.809350  233009 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:50:38.809409  233009 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:50:38.824285  233009 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:50:38.824302  233009 kubeadm.go:158] found existing configuration files:
	
	I0110 02:50:38.824353  233009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:50:38.836812  233009 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:50:38.836962  233009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:50:38.846963  233009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:50:38.858702  233009 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:50:38.858806  233009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:50:38.868944  233009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:50:38.878794  233009 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:50:38.878948  233009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:50:38.887137  233009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:50:38.896216  233009 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:50:38.896368  233009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:50:38.905714  233009 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:50:38.953342  233009 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:50:38.953983  233009 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:50:39.048375  233009 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:50:39.048493  233009 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:50:39.048592  233009 kubeadm.go:319] OS: Linux
	I0110 02:50:39.048693  233009 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:50:39.048761  233009 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:50:39.048829  233009 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:50:39.048908  233009 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:50:39.048980  233009 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:50:39.049073  233009 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:50:39.049154  233009 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:50:39.049210  233009 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:50:39.049263  233009 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:50:39.127427  233009 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:50:39.127611  233009 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:50:39.127743  233009 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:50:39.141871  233009 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W0110 02:50:37.742994  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	W0110 02:50:40.241180  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	I0110 02:50:39.151113  233009 out.go:252]   - Generating certificates and keys ...
	I0110 02:50:39.151282  233009 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:50:39.151397  233009 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:50:39.237547  233009 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 02:50:39.719322  233009 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 02:50:40.210638  233009 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 02:50:40.309467  233009 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 02:50:40.573329  233009 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 02:50:40.573888  233009 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-989144 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 02:50:41.113968  233009 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 02:50:41.114540  233009 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-989144 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 02:50:41.549824  233009 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 02:50:41.994139  233009 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 02:50:42.257565  233009 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 02:50:42.258218  233009 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:50:42.404806  233009 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:50:42.665855  233009 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:50:43.316537  233009 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:50:44.154938  233009 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:50:44.309092  233009 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:50:44.309191  233009 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:50:44.313069  233009 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W0110 02:50:42.246121  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	W0110 02:50:44.740965  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	W0110 02:50:46.741115  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	I0110 02:50:44.316448  233009 out.go:252]   - Booting up control plane ...
	I0110 02:50:44.316549  233009 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:50:44.316626  233009 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:50:44.316694  233009 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:50:44.341787  233009 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:50:44.341920  233009 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:50:44.355565  233009 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:50:44.355670  233009 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:50:44.355716  233009 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:50:44.509113  233009 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:50:44.509290  233009 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:50:45.512582  233009 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.003422607s
	I0110 02:50:45.521304  233009 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0110 02:50:45.522867  233009 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0110 02:50:45.523483  233009 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0110 02:50:45.524144  233009 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W0110 02:50:49.242356  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	W0110 02:50:51.741078  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	I0110 02:50:49.048301  233009 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.52403287s
	I0110 02:50:50.691148  233009 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.166941147s
	I0110 02:50:52.525451  233009 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.001733004s
	I0110 02:50:52.576585  233009 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0110 02:50:52.600245  233009 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0110 02:50:52.616523  233009 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0110 02:50:52.617526  233009 kubeadm.go:319] [mark-control-plane] Marking the node auto-989144 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0110 02:50:52.639765  233009 kubeadm.go:319] [bootstrap-token] Using token: qnk1al.g7fjbz0nbykrrgx8
	I0110 02:50:52.642792  233009 out.go:252]   - Configuring RBAC rules ...
	I0110 02:50:52.642923  233009 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0110 02:50:52.651900  233009 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0110 02:50:52.662839  233009 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0110 02:50:52.669844  233009 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0110 02:50:52.683695  233009 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0110 02:50:52.688867  233009 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0110 02:50:52.942521  233009 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0110 02:50:53.410352  233009 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0110 02:50:53.942334  233009 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0110 02:50:53.944763  233009 kubeadm.go:319] 
	I0110 02:50:53.944842  233009 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0110 02:50:53.944847  233009 kubeadm.go:319] 
	I0110 02:50:53.944936  233009 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0110 02:50:53.944940  233009 kubeadm.go:319] 
	I0110 02:50:53.944965  233009 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0110 02:50:53.945024  233009 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0110 02:50:53.945075  233009 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0110 02:50:53.945079  233009 kubeadm.go:319] 
	I0110 02:50:53.945133  233009 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0110 02:50:53.945137  233009 kubeadm.go:319] 
	I0110 02:50:53.945190  233009 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0110 02:50:53.945195  233009 kubeadm.go:319] 
	I0110 02:50:53.945246  233009 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0110 02:50:53.945321  233009 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0110 02:50:53.945389  233009 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0110 02:50:53.945394  233009 kubeadm.go:319] 
	I0110 02:50:53.945477  233009 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0110 02:50:53.945557  233009 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0110 02:50:53.945563  233009 kubeadm.go:319] 
	I0110 02:50:53.945648  233009 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token qnk1al.g7fjbz0nbykrrgx8 \
	I0110 02:50:53.945751  233009 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e531c0ad449fbe8263f118458e677222d5af5abf33ad8c77cbc1152855f23f5 \
	I0110 02:50:53.945771  233009 kubeadm.go:319] 	--control-plane 
	I0110 02:50:53.945787  233009 kubeadm.go:319] 
	I0110 02:50:53.945873  233009 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0110 02:50:53.945877  233009 kubeadm.go:319] 
	I0110 02:50:53.945959  233009 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qnk1al.g7fjbz0nbykrrgx8 \
	I0110 02:50:53.946061  233009 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e531c0ad449fbe8263f118458e677222d5af5abf33ad8c77cbc1152855f23f5 
	I0110 02:50:53.949038  233009 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:50:53.949488  233009 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:50:53.949621  233009 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:50:53.949666  233009 cni.go:84] Creating CNI manager for ""
	I0110 02:50:53.949679  233009 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:50:53.954876  233009 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W0110 02:50:54.241007  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	W0110 02:50:56.740300  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	I0110 02:50:53.957892  233009 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0110 02:50:53.963976  233009 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0110 02:50:53.963999  233009 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0110 02:50:53.985973  233009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0110 02:50:54.695080  233009 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0110 02:50:54.695211  233009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:50:54.695293  233009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-989144 minikube.k8s.io/updated_at=2026_01_10T02_50_54_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510 minikube.k8s.io/name=auto-989144 minikube.k8s.io/primary=true
	I0110 02:50:54.877104  233009 ops.go:34] apiserver oom_adj: -16
	I0110 02:50:54.877230  233009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:50:55.378240  233009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:50:55.878023  233009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:50:56.378336  233009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:50:56.878301  233009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:50:57.378140  233009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:50:57.877894  233009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:50:58.377461  233009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:50:58.527279  233009 kubeadm.go:1114] duration metric: took 3.832106628s to wait for elevateKubeSystemPrivileges
	I0110 02:50:58.527326  233009 kubeadm.go:403] duration metric: took 19.794495871s to StartCluster
	I0110 02:50:58.527345  233009 settings.go:142] acquiring lock: {Name:mkf19ab3097ad600bdb33c5b47b20062ddaabac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:58.527438  233009 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:50:58.528547  233009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:58.530478  233009 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0110 02:50:58.530807  233009 config.go:182] Loaded profile config "auto-989144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:50:58.531040  233009 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:50:58.531105  233009 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:50:58.531311  233009 addons.go:70] Setting storage-provisioner=true in profile "auto-989144"
	I0110 02:50:58.531344  233009 addons.go:239] Setting addon storage-provisioner=true in "auto-989144"
	I0110 02:50:58.531370  233009 host.go:66] Checking if "auto-989144" exists ...
	I0110 02:50:58.531893  233009 cli_runner.go:164] Run: docker container inspect auto-989144 --format={{.State.Status}}
	I0110 02:50:58.532249  233009 addons.go:70] Setting default-storageclass=true in profile "auto-989144"
	I0110 02:50:58.532274  233009 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-989144"
	I0110 02:50:58.532608  233009 cli_runner.go:164] Run: docker container inspect auto-989144 --format={{.State.Status}}
	I0110 02:50:58.535197  233009 out.go:179] * Verifying Kubernetes components...
	I0110 02:50:58.538946  233009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:50:58.572034  233009 addons.go:239] Setting addon default-storageclass=true in "auto-989144"
	I0110 02:50:58.572077  233009 host.go:66] Checking if "auto-989144" exists ...
	I0110 02:50:58.572596  233009 cli_runner.go:164] Run: docker container inspect auto-989144 --format={{.State.Status}}
	I0110 02:50:58.584970  233009 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:50:58.587910  233009 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:50:58.587933  233009 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:50:58.587996  233009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989144
	I0110 02:50:58.610751  233009 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:50:58.610772  233009 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:50:58.610849  233009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989144
	I0110 02:50:58.641590  233009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/auto-989144/id_rsa Username:docker}
	I0110 02:50:58.664031  233009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/auto-989144/id_rsa Username:docker}
	I0110 02:50:58.896648  233009 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:50:58.956224  233009 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0110 02:50:58.956399  233009 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:50:58.985883  233009 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:50:59.733063  233009 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0110 02:50:59.734162  233009 node_ready.go:35] waiting up to 15m0s for node "auto-989144" to be "Ready" ...
	I0110 02:50:59.802412  233009 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W0110 02:50:58.743154  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	W0110 02:51:01.244052  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	I0110 02:50:59.805204  233009 addons.go:530] duration metric: took 1.274091736s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0110 02:51:00.247559  233009 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-989144" context rescaled to 1 replicas
	W0110 02:51:01.741664  233009 node_ready.go:57] node "auto-989144" has "Ready":"False" status (will retry)
	W0110 02:51:03.742138  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	I0110 02:51:05.741594  231933 pod_ready.go:94] pod "coredns-7d764666f9-sck2c" is "Ready"
	I0110 02:51:05.741625  231933 pod_ready.go:86] duration metric: took 32.50655955s for pod "coredns-7d764666f9-sck2c" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:05.744399  231933 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:05.750171  231933 pod_ready.go:94] pod "etcd-default-k8s-diff-port-403885" is "Ready"
	I0110 02:51:05.750195  231933 pod_ready.go:86] duration metric: took 5.768233ms for pod "etcd-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:05.752470  231933 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:05.757238  231933 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-403885" is "Ready"
	I0110 02:51:05.757261  231933 pod_ready.go:86] duration metric: took 4.76712ms for pod "kube-apiserver-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:05.759316  231933 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:05.938558  231933 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-403885" is "Ready"
	I0110 02:51:05.938586  231933 pod_ready.go:86] duration metric: took 179.251344ms for pod "kube-controller-manager-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:06.138739  231933 pod_ready.go:83] waiting for pod "kube-proxy-ss9fs" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:06.538736  231933 pod_ready.go:94] pod "kube-proxy-ss9fs" is "Ready"
	I0110 02:51:06.538760  231933 pod_ready.go:86] duration metric: took 399.996046ms for pod "kube-proxy-ss9fs" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:06.739273  231933 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:07.138506  231933 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-403885" is "Ready"
	I0110 02:51:07.138532  231933 pod_ready.go:86] duration metric: took 399.230955ms for pod "kube-scheduler-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:07.138545  231933 pod_ready.go:40] duration metric: took 33.908287639s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:51:07.198377  231933 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 02:51:07.201478  231933 out.go:203] 
	W0110 02:51:07.204318  231933 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 02:51:07.207229  231933 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:51:07.210084  231933 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-403885" cluster and "default" namespace by default
	W0110 02:51:04.237909  233009 node_ready.go:57] node "auto-989144" has "Ready":"False" status (will retry)
	W0110 02:51:06.739702  233009 node_ready.go:57] node "auto-989144" has "Ready":"False" status (will retry)
	W0110 02:51:09.237549  233009 node_ready.go:57] node "auto-989144" has "Ready":"False" status (will retry)
	W0110 02:51:11.737555  233009 node_ready.go:57] node "auto-989144" has "Ready":"False" status (will retry)
	I0110 02:51:12.742381  233009 node_ready.go:49] node "auto-989144" is "Ready"
	I0110 02:51:12.742409  233009 node_ready.go:38] duration metric: took 13.00818533s for node "auto-989144" to be "Ready" ...
	I0110 02:51:12.742428  233009 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:51:12.742480  233009 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:51:12.755022  233009 api_server.go:72] duration metric: took 14.22381075s to wait for apiserver process to appear ...
	I0110 02:51:12.755046  233009 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:51:12.755064  233009 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:51:12.764677  233009 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 02:51:12.766870  233009 api_server.go:141] control plane version: v1.35.0
	I0110 02:51:12.766941  233009 api_server.go:131] duration metric: took 11.888376ms to wait for apiserver health ...
	I0110 02:51:12.766965  233009 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:51:12.771220  233009 system_pods.go:59] 8 kube-system pods found
	I0110 02:51:12.771300  233009 system_pods.go:61] "coredns-7d764666f9-n982k" [877f2d3b-23c5-49a4-9a27-adab42fe2451] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:51:12.771322  233009 system_pods.go:61] "etcd-auto-989144" [e2f0ccc2-a620-4ebd-8852-0e2022be1ce6] Running
	I0110 02:51:12.771362  233009 system_pods.go:61] "kindnet-ktmk2" [962be512-25e1-4c67-a7b5-f21dac1ac303] Running
	I0110 02:51:12.771383  233009 system_pods.go:61] "kube-apiserver-auto-989144" [9dce557d-11a9-406e-a89f-45347e73cfa8] Running
	I0110 02:51:12.771404  233009 system_pods.go:61] "kube-controller-manager-auto-989144" [2744eb18-2b2a-4bbb-b271-07eaccb0f0fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:51:12.771426  233009 system_pods.go:61] "kube-proxy-l9j6v" [3905bb5b-03f0-4eac-8478-4a972acad6cb] Running
	I0110 02:51:12.771463  233009 system_pods.go:61] "kube-scheduler-auto-989144" [0a34810d-f7ec-4d9c-8a8c-dcb5ae67683c] Running
	I0110 02:51:12.771492  233009 system_pods.go:61] "storage-provisioner" [c0929b7c-b8da-4b7d-b8e9-ccfe9e1bc5a1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:51:12.771524  233009 system_pods.go:74] duration metric: took 4.540394ms to wait for pod list to return data ...
	I0110 02:51:12.771552  233009 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:51:12.778277  233009 default_sa.go:45] found service account: "default"
	I0110 02:51:12.778359  233009 default_sa.go:55] duration metric: took 6.787472ms for default service account to be created ...
	I0110 02:51:12.778384  233009 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:51:12.784214  233009 system_pods.go:86] 8 kube-system pods found
	I0110 02:51:12.784294  233009 system_pods.go:89] "coredns-7d764666f9-n982k" [877f2d3b-23c5-49a4-9a27-adab42fe2451] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:51:12.784320  233009 system_pods.go:89] "etcd-auto-989144" [e2f0ccc2-a620-4ebd-8852-0e2022be1ce6] Running
	I0110 02:51:12.784356  233009 system_pods.go:89] "kindnet-ktmk2" [962be512-25e1-4c67-a7b5-f21dac1ac303] Running
	I0110 02:51:12.784379  233009 system_pods.go:89] "kube-apiserver-auto-989144" [9dce557d-11a9-406e-a89f-45347e73cfa8] Running
	I0110 02:51:12.784405  233009 system_pods.go:89] "kube-controller-manager-auto-989144" [2744eb18-2b2a-4bbb-b271-07eaccb0f0fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:51:12.784441  233009 system_pods.go:89] "kube-proxy-l9j6v" [3905bb5b-03f0-4eac-8478-4a972acad6cb] Running
	I0110 02:51:12.784466  233009 system_pods.go:89] "kube-scheduler-auto-989144" [0a34810d-f7ec-4d9c-8a8c-dcb5ae67683c] Running
	I0110 02:51:12.784500  233009 system_pods.go:89] "storage-provisioner" [c0929b7c-b8da-4b7d-b8e9-ccfe9e1bc5a1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:51:12.784563  233009 retry.go:84] will retry after 300ms: missing components: kube-dns
	I0110 02:51:13.085974  233009 system_pods.go:86] 8 kube-system pods found
	I0110 02:51:13.086010  233009 system_pods.go:89] "coredns-7d764666f9-n982k" [877f2d3b-23c5-49a4-9a27-adab42fe2451] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:51:13.086017  233009 system_pods.go:89] "etcd-auto-989144" [e2f0ccc2-a620-4ebd-8852-0e2022be1ce6] Running
	I0110 02:51:13.086024  233009 system_pods.go:89] "kindnet-ktmk2" [962be512-25e1-4c67-a7b5-f21dac1ac303] Running
	I0110 02:51:13.086071  233009 system_pods.go:89] "kube-apiserver-auto-989144" [9dce557d-11a9-406e-a89f-45347e73cfa8] Running
	I0110 02:51:13.086079  233009 system_pods.go:89] "kube-controller-manager-auto-989144" [2744eb18-2b2a-4bbb-b271-07eaccb0f0fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:51:13.086084  233009 system_pods.go:89] "kube-proxy-l9j6v" [3905bb5b-03f0-4eac-8478-4a972acad6cb] Running
	I0110 02:51:13.086093  233009 system_pods.go:89] "kube-scheduler-auto-989144" [0a34810d-f7ec-4d9c-8a8c-dcb5ae67683c] Running
	I0110 02:51:13.086100  233009 system_pods.go:89] "storage-provisioner" [c0929b7c-b8da-4b7d-b8e9-ccfe9e1bc5a1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:51:13.341772  233009 system_pods.go:86] 8 kube-system pods found
	I0110 02:51:13.341804  233009 system_pods.go:89] "coredns-7d764666f9-n982k" [877f2d3b-23c5-49a4-9a27-adab42fe2451] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:51:13.341833  233009 system_pods.go:89] "etcd-auto-989144" [e2f0ccc2-a620-4ebd-8852-0e2022be1ce6] Running
	I0110 02:51:13.341853  233009 system_pods.go:89] "kindnet-ktmk2" [962be512-25e1-4c67-a7b5-f21dac1ac303] Running
	I0110 02:51:13.341859  233009 system_pods.go:89] "kube-apiserver-auto-989144" [9dce557d-11a9-406e-a89f-45347e73cfa8] Running
	I0110 02:51:13.341865  233009 system_pods.go:89] "kube-controller-manager-auto-989144" [2744eb18-2b2a-4bbb-b271-07eaccb0f0fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:51:13.341870  233009 system_pods.go:89] "kube-proxy-l9j6v" [3905bb5b-03f0-4eac-8478-4a972acad6cb] Running
	I0110 02:51:13.341876  233009 system_pods.go:89] "kube-scheduler-auto-989144" [0a34810d-f7ec-4d9c-8a8c-dcb5ae67683c] Running
	I0110 02:51:13.341886  233009 system_pods.go:89] "storage-provisioner" [c0929b7c-b8da-4b7d-b8e9-ccfe9e1bc5a1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:51:13.824209  233009 system_pods.go:86] 8 kube-system pods found
	I0110 02:51:13.824236  233009 system_pods.go:89] "coredns-7d764666f9-n982k" [877f2d3b-23c5-49a4-9a27-adab42fe2451] Running
	I0110 02:51:13.824249  233009 system_pods.go:89] "etcd-auto-989144" [e2f0ccc2-a620-4ebd-8852-0e2022be1ce6] Running
	I0110 02:51:13.824255  233009 system_pods.go:89] "kindnet-ktmk2" [962be512-25e1-4c67-a7b5-f21dac1ac303] Running
	I0110 02:51:13.824259  233009 system_pods.go:89] "kube-apiserver-auto-989144" [9dce557d-11a9-406e-a89f-45347e73cfa8] Running
	I0110 02:51:13.824266  233009 system_pods.go:89] "kube-controller-manager-auto-989144" [2744eb18-2b2a-4bbb-b271-07eaccb0f0fc] Running
	I0110 02:51:13.824271  233009 system_pods.go:89] "kube-proxy-l9j6v" [3905bb5b-03f0-4eac-8478-4a972acad6cb] Running
	I0110 02:51:13.824276  233009 system_pods.go:89] "kube-scheduler-auto-989144" [0a34810d-f7ec-4d9c-8a8c-dcb5ae67683c] Running
	I0110 02:51:13.824280  233009 system_pods.go:89] "storage-provisioner" [c0929b7c-b8da-4b7d-b8e9-ccfe9e1bc5a1] Running
	I0110 02:51:13.824287  233009 system_pods.go:126] duration metric: took 1.045884994s to wait for k8s-apps to be running ...
	I0110 02:51:13.824295  233009 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:51:13.824349  233009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:51:13.838729  233009 system_svc.go:56] duration metric: took 14.423825ms WaitForService to wait for kubelet
	I0110 02:51:13.838759  233009 kubeadm.go:587] duration metric: took 15.307551856s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:51:13.838779  233009 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:51:13.841957  233009 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 02:51:13.842026  233009 node_conditions.go:123] node cpu capacity is 2
	I0110 02:51:13.842046  233009 node_conditions.go:105] duration metric: took 3.261035ms to run NodePressure ...
	I0110 02:51:13.842060  233009 start.go:242] waiting for startup goroutines ...
	I0110 02:51:13.842068  233009 start.go:247] waiting for cluster config update ...
	I0110 02:51:13.842079  233009 start.go:256] writing updated cluster config ...
	I0110 02:51:13.842382  233009 ssh_runner.go:195] Run: rm -f paused
	I0110 02:51:13.846419  233009 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:51:13.849777  233009 pod_ready.go:83] waiting for pod "coredns-7d764666f9-n982k" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:13.853863  233009 pod_ready.go:94] pod "coredns-7d764666f9-n982k" is "Ready"
	I0110 02:51:13.853926  233009 pod_ready.go:86] duration metric: took 4.122509ms for pod "coredns-7d764666f9-n982k" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:13.856207  233009 pod_ready.go:83] waiting for pod "etcd-auto-989144" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:13.859978  233009 pod_ready.go:94] pod "etcd-auto-989144" is "Ready"
	I0110 02:51:13.860000  233009 pod_ready.go:86] duration metric: took 3.772537ms for pod "etcd-auto-989144" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:13.862859  233009 pod_ready.go:83] waiting for pod "kube-apiserver-auto-989144" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:13.866799  233009 pod_ready.go:94] pod "kube-apiserver-auto-989144" is "Ready"
	I0110 02:51:13.866825  233009 pod_ready.go:86] duration metric: took 3.945496ms for pod "kube-apiserver-auto-989144" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:13.868987  233009 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-989144" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:14.250056  233009 pod_ready.go:94] pod "kube-controller-manager-auto-989144" is "Ready"
	I0110 02:51:14.250089  233009 pod_ready.go:86] duration metric: took 381.079052ms for pod "kube-controller-manager-auto-989144" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:14.450578  233009 pod_ready.go:83] waiting for pod "kube-proxy-l9j6v" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:14.850707  233009 pod_ready.go:94] pod "kube-proxy-l9j6v" is "Ready"
	I0110 02:51:14.850783  233009 pod_ready.go:86] duration metric: took 400.175199ms for pod "kube-proxy-l9j6v" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:15.051054  233009 pod_ready.go:83] waiting for pod "kube-scheduler-auto-989144" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:15.450683  233009 pod_ready.go:94] pod "kube-scheduler-auto-989144" is "Ready"
	I0110 02:51:15.450719  233009 pod_ready.go:86] duration metric: took 399.636103ms for pod "kube-scheduler-auto-989144" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:15.450733  233009 pod_ready.go:40] duration metric: took 1.60428708s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:51:15.510042  233009 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 02:51:15.513130  233009 out.go:203] 
	W0110 02:51:15.516640  233009 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 02:51:15.519559  233009 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:51:15.522469  233009 out.go:179] * Done! kubectl is now configured to use "auto-989144" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 10 02:51:03 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:03.607215983Z" level=info msg="Started container" PID=1694 containerID=910d0dab6a77a8bbc264e0daefb03ba43c5628340ff9b5c13ebb7ce7186fb91b description=kube-system/storage-provisioner/storage-provisioner id=91f2cf90-273e-4a96-ada8-e32de6c6faf8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2c35a4d3fd7dd89cc874840b3e55ea477b9a709b141d944875371a2b12088994
	Jan 10 02:51:13 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:13.049785085Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:51:13 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:13.049821309Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:51:13 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:13.054783378Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:51:13 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:13.054815516Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:51:13 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:13.05880632Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:51:13 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:13.058962189Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:51:13 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:13.063360243Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:51:13 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:13.063393465Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:51:13 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:13.063452426Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 10 02:51:13 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:13.067269835Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:51:13 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:13.067307142Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:51:16 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:16.361139152Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7d2f8673-9fa6-4afb-8ad6-cbbe4ae607d6 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:51:16 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:16.363272905Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=24b52c6d-ef16-4d27-9387-bd29605e0774 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:51:16 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:16.364601625Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ngnzh/dashboard-metrics-scraper" id=5c085bbb-3b80-4a8f-915b-ed36916f86c2 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:51:16 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:16.364704727Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:51:16 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:16.377031312Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:51:16 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:16.37765765Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:51:16 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:16.402445675Z" level=info msg="Created container cea7cb71f60a3bf1cc0f92af289dd1a12432224c4eec12b146d182a413e0d79a: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ngnzh/dashboard-metrics-scraper" id=5c085bbb-3b80-4a8f-915b-ed36916f86c2 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:51:16 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:16.40350483Z" level=info msg="Starting container: cea7cb71f60a3bf1cc0f92af289dd1a12432224c4eec12b146d182a413e0d79a" id=0d2c95ea-1f97-47d1-bc92-7477266ba35a name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:51:16 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:16.410560284Z" level=info msg="Started container" PID=1773 containerID=cea7cb71f60a3bf1cc0f92af289dd1a12432224c4eec12b146d182a413e0d79a description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ngnzh/dashboard-metrics-scraper id=0d2c95ea-1f97-47d1-bc92-7477266ba35a name=/runtime.v1.RuntimeService/StartContainer sandboxID=98421345179a1438c45812935ad55ee036c9c84f17a862507655df21397da3b4
	Jan 10 02:51:16 default-k8s-diff-port-403885 conmon[1771]: conmon cea7cb71f60a3bf1cc0f <ninfo>: container 1773 exited with status 1
	Jan 10 02:51:16 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:16.618767145Z" level=info msg="Removing container: a8bfb9653382e887ee3cf9c6de048fbe30fabcad1d3c5e5fc7a07a62a94a0f5c" id=1bd78e4a-5b3f-479b-8f1c-1c20f29c6f1c name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:51:16 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:16.627393929Z" level=info msg="Error loading conmon cgroup of container a8bfb9653382e887ee3cf9c6de048fbe30fabcad1d3c5e5fc7a07a62a94a0f5c: cgroup deleted" id=1bd78e4a-5b3f-479b-8f1c-1c20f29c6f1c name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:51:16 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:16.632024322Z" level=info msg="Removed container a8bfb9653382e887ee3cf9c6de048fbe30fabcad1d3c5e5fc7a07a62a94a0f5c: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ngnzh/dashboard-metrics-scraper" id=1bd78e4a-5b3f-479b-8f1c-1c20f29c6f1c name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	cea7cb71f60a3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago       Exited              dashboard-metrics-scraper   3                   98421345179a1       dashboard-metrics-scraper-867fb5f87b-ngnzh             kubernetes-dashboard
	910d0dab6a77a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago      Running             storage-provisioner         2                   2c35a4d3fd7dd       storage-provisioner                                    kube-system
	c189fb7bad01f       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   33 seconds ago      Running             kubernetes-dashboard        0                   9c44612b5f45d       kubernetes-dashboard-b84665fb8-l5llr                   kubernetes-dashboard
	fa1837feaa39e       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           49 seconds ago      Running             coredns                     1                   879d5ffee7c26       coredns-7d764666f9-sck2c                               kube-system
	b6acf28c89529       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago      Running             busybox                     1                   08dcbe9c1056c       busybox                                                default
	bc847f22b7096       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           50 seconds ago      Running             kindnet-cni                 1                   817278ab3546b       kindnet-4h8vm                                          kube-system
	b162bf1f9ee4d       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           50 seconds ago      Running             kube-proxy                  1                   606888b668327       kube-proxy-ss9fs                                       kube-system
	f3a4dab3b3499       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           50 seconds ago      Exited              storage-provisioner         1                   2c35a4d3fd7dd       storage-provisioner                                    kube-system
	ccaef7514d5ac       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           56 seconds ago      Running             kube-apiserver              1                   089af88f8330e       kube-apiserver-default-k8s-diff-port-403885            kube-system
	0a4524e2475eb       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           56 seconds ago      Running             etcd                        1                   ec3eacb0b9c47       etcd-default-k8s-diff-port-403885                      kube-system
	73f1ff6161183       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           56 seconds ago      Running             kube-scheduler              1                   c2f50ce98fc5a       kube-scheduler-default-k8s-diff-port-403885            kube-system
	3eef2c483d9e9       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           56 seconds ago      Running             kube-controller-manager     1                   024569b12df18       kube-controller-manager-default-k8s-diff-port-403885   kube-system
	
	
	==> coredns [fa1837feaa39e020154dcf1fa0e3cfdcc389657a13b2a7911522e055b8d5c205] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:53725 - 54010 "HINFO IN 2023192538793103702.2930508901437709455. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.065750182s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-403885
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-403885
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=default-k8s-diff-port-403885
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_49_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:49:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-403885
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:51:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:51:02 +0000   Sat, 10 Jan 2026 02:49:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:51:02 +0000   Sat, 10 Jan 2026 02:49:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:51:02 +0000   Sat, 10 Jan 2026 02:49:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:51:02 +0000   Sat, 10 Jan 2026 02:49:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-403885
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                c5be33d9-0382-423b-9b90-3c979c14f2d9
	  Boot ID:                    41f52236-76b9-4dc8-bfe3-121fbdeb9659
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-7d764666f9-sck2c                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     109s
	  kube-system                 etcd-default-k8s-diff-port-403885                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         114s
	  kube-system                 kindnet-4h8vm                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-default-k8s-diff-port-403885             250m (12%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-403885    200m (10%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-ss9fs                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-default-k8s-diff-port-403885             100m (5%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-ngnzh              0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-l5llr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  110s  node-controller  Node default-k8s-diff-port-403885 event: Registered Node default-k8s-diff-port-403885 in Controller
	  Normal  RegisteredNode  48s   node-controller  Node default-k8s-diff-port-403885 event: Registered Node default-k8s-diff-port-403885 in Controller
	
	
	==> dmesg <==
	[ +23.212409] overlayfs: idmapped layers are currently not supported
	[  +9.866169] overlayfs: idmapped layers are currently not supported
	[Jan10 02:19] overlayfs: idmapped layers are currently not supported
	[Jan10 02:20] overlayfs: idmapped layers are currently not supported
	[ +35.960240] overlayfs: idmapped layers are currently not supported
	[Jan10 02:21] overlayfs: idmapped layers are currently not supported
	[Jan10 02:23] overlayfs: idmapped layers are currently not supported
	[Jan10 02:24] overlayfs: idmapped layers are currently not supported
	[Jan10 02:25] overlayfs: idmapped layers are currently not supported
	[Jan10 02:30] overlayfs: idmapped layers are currently not supported
	[Jan10 02:31] overlayfs: idmapped layers are currently not supported
	[Jan10 02:35] overlayfs: idmapped layers are currently not supported
	[Jan10 02:37] overlayfs: idmapped layers are currently not supported
	[Jan10 02:41] overlayfs: idmapped layers are currently not supported
	[ +37.534021] overlayfs: idmapped layers are currently not supported
	[Jan10 02:43] overlayfs: idmapped layers are currently not supported
	[Jan10 02:44] overlayfs: idmapped layers are currently not supported
	[Jan10 02:45] overlayfs: idmapped layers are currently not supported
	[Jan10 02:46] overlayfs: idmapped layers are currently not supported
	[Jan10 02:48] overlayfs: idmapped layers are currently not supported
	[Jan10 02:49] overlayfs: idmapped layers are currently not supported
	[  +4.690964] overlayfs: idmapped layers are currently not supported
	[ +26.361261] overlayfs: idmapped layers are currently not supported
	[Jan10 02:50] overlayfs: idmapped layers are currently not supported
	[ +20.145083] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0a4524e2475eb21f6dbb36bb2158a508c31075979547ca9c30367ced5eab40f6] <==
	{"level":"info","ts":"2026-01-10T02:50:27.233962Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T02:50:27.234041Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T02:50:27.263214Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2026-01-10T02:50:27.263893Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-10T02:50:27.263907Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-10T02:50:27.264111Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T02:50:27.264135Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T02:50:27.658230Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T02:50:27.658300Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:50:27.658340Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-10T02:50:27.658351Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:50:27.658365Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T02:50:27.662390Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-10T02:50:27.662432Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:50:27.662450Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2026-01-10T02:50:27.662466Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-10T02:50:27.664051Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-diff-port-403885 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:50:27.664089Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:50:27.664124Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:50:27.665108Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:50:27.666981Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T02:50:27.667619Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:50:27.705515Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2026-01-10T02:50:27.723901Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:50:27.723954Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 02:51:23 up  1:33,  0 user,  load average: 2.80, 2.67, 2.18
	Linux default-k8s-diff-port-403885 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bc847f22b7096dcdd6f43e2f7fd0bb0bec20221d544edea551adf64088c8d1f9] <==
	I0110 02:50:32.853398       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:50:32.924095       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0110 02:50:32.924300       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:50:32.924341       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:50:32.924384       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:50:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:50:33.037827       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:50:33.123844       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:50:33.123956       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:50:33.125579       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0110 02:51:03.041493       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0110 02:51:03.125140       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0110 02:51:03.125140       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0110 02:51:03.125234       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I0110 02:51:04.424453       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:51:04.424485       1 metrics.go:72] Registering metrics
	I0110 02:51:04.424536       1 controller.go:711] "Syncing nftables rules"
	I0110 02:51:13.043966       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 02:51:13.044622       1 main.go:301] handling current node
	I0110 02:51:23.039881       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 02:51:23.039919       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ccaef7514d5ac77efad7a1f91ebd93145b03e3462b1154eb3f723d4a112acdf6] <==
	I0110 02:50:31.370526       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0110 02:50:31.370601       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 02:50:31.370670       1 aggregator.go:187] initial CRD sync complete...
	I0110 02:50:31.370678       1 autoregister_controller.go:144] Starting autoregister controller
	I0110 02:50:31.370684       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 02:50:31.370689       1 cache.go:39] Caches are synced for autoregister controller
	I0110 02:50:31.379256       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 02:50:31.379361       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:31.379378       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:31.382379       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0110 02:50:31.382399       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0110 02:50:31.386037       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 02:50:31.399310       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E0110 02:50:31.431496       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 02:50:31.912001       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:50:31.957172       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:50:32.048049       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:50:32.135645       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:50:32.166683       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:50:32.196204       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:50:32.413516       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.64.212"}
	I0110 02:50:32.477425       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.226.81"}
	I0110 02:50:34.934657       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:50:34.978953       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:50:35.032773       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [3eef2c483d9e9a9a8a3ab6bf3e8cf944fedbac4b9d208f8d1d0c7a3086f100ac] <==
	I0110 02:50:34.421475       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.421554       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.421601       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.421656       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.421759       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 02:50:34.421869       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="default-k8s-diff-port-403885"
	I0110 02:50:34.421962       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0110 02:50:34.422025       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.428128       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.432679       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.432774       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.433425       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.433505       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.433581       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.433757       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.434934       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.435044       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.448722       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.452266       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.458815       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.513245       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.513364       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:50:34.513411       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:50:34.521275       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.986333       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [b162bf1f9ee4d4f795e11b6f38b571c7fe566f4233b8a46e579adb8bdc2bdc39] <==
	I0110 02:50:32.949790       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:50:33.025628       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:50:33.126837       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:33.126927       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0110 02:50:33.127100       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:50:33.147089       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:50:33.147151       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:50:33.151096       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:50:33.151450       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:50:33.151515       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:50:33.154566       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:50:33.154587       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:50:33.154848       1 config.go:200] "Starting service config controller"
	I0110 02:50:33.154864       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:50:33.155169       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:50:33.155226       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:50:33.168380       1 config.go:309] "Starting node config controller"
	I0110 02:50:33.168404       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:50:33.168412       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:50:33.255874       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:50:33.255887       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 02:50:33.255902       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [73f1ff616118310e5ea35e9ab5496ff345a48d4dbb2a0c33eba742bc17a20097] <==
	I0110 02:50:29.333855       1 serving.go:386] Generated self-signed cert in-memory
	W0110 02:50:31.096255       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 02:50:31.096390       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 02:50:31.096425       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 02:50:31.096479       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 02:50:31.289002       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 02:50:31.291863       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:50:31.294050       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 02:50:31.294067       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:50:31.294670       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 02:50:31.294952       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 02:50:31.394202       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:50:45 default-k8s-diff-port-403885 kubelet[797]: E0110 02:50:45.937099     797 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-403885" containerName="etcd"
	Jan 10 02:50:46 default-k8s-diff-port-403885 kubelet[797]: E0110 02:50:46.530086     797 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-403885" containerName="etcd"
	Jan 10 02:50:49 default-k8s-diff-port-403885 kubelet[797]: E0110 02:50:49.540105     797 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-l5llr" containerName="kubernetes-dashboard"
	Jan 10 02:50:50 default-k8s-diff-port-403885 kubelet[797]: E0110 02:50:50.542509     797 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-l5llr" containerName="kubernetes-dashboard"
	Jan 10 02:50:53 default-k8s-diff-port-403885 kubelet[797]: E0110 02:50:53.359676     797 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ngnzh" containerName="dashboard-metrics-scraper"
	Jan 10 02:50:53 default-k8s-diff-port-403885 kubelet[797]: I0110 02:50:53.360213     797 scope.go:122] "RemoveContainer" containerID="428237fb0141d71c924e94787d4ef3230f609869b503542b047b35082b621ef2"
	Jan 10 02:50:53 default-k8s-diff-port-403885 kubelet[797]: I0110 02:50:53.551766     797 scope.go:122] "RemoveContainer" containerID="428237fb0141d71c924e94787d4ef3230f609869b503542b047b35082b621ef2"
	Jan 10 02:50:53 default-k8s-diff-port-403885 kubelet[797]: E0110 02:50:53.552510     797 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ngnzh" containerName="dashboard-metrics-scraper"
	Jan 10 02:50:53 default-k8s-diff-port-403885 kubelet[797]: I0110 02:50:53.552537     797 scope.go:122] "RemoveContainer" containerID="a8bfb9653382e887ee3cf9c6de048fbe30fabcad1d3c5e5fc7a07a62a94a0f5c"
	Jan 10 02:50:53 default-k8s-diff-port-403885 kubelet[797]: E0110 02:50:53.552784     797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ngnzh_kubernetes-dashboard(ba3001f4-1a32-4808-bd59-6d66fe5a0867)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ngnzh" podUID="ba3001f4-1a32-4808-bd59-6d66fe5a0867"
	Jan 10 02:50:53 default-k8s-diff-port-403885 kubelet[797]: I0110 02:50:53.569244     797 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-l5llr" podStartSLOduration=4.94752396 podStartE2EDuration="18.568455136s" podCreationTimestamp="2026-01-10 02:50:35 +0000 UTC" firstStartedPulling="2026-01-10 02:50:35.841719964 +0000 UTC m=+10.794840923" lastFinishedPulling="2026-01-10 02:50:49.462651139 +0000 UTC m=+24.415772099" observedRunningTime="2026-01-10 02:50:49.558215848 +0000 UTC m=+24.511336816" watchObservedRunningTime="2026-01-10 02:50:53.568455136 +0000 UTC m=+28.521576096"
	Jan 10 02:50:55 default-k8s-diff-port-403885 kubelet[797]: E0110 02:50:55.769826     797 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ngnzh" containerName="dashboard-metrics-scraper"
	Jan 10 02:50:55 default-k8s-diff-port-403885 kubelet[797]: I0110 02:50:55.769875     797 scope.go:122] "RemoveContainer" containerID="a8bfb9653382e887ee3cf9c6de048fbe30fabcad1d3c5e5fc7a07a62a94a0f5c"
	Jan 10 02:50:55 default-k8s-diff-port-403885 kubelet[797]: E0110 02:50:55.770033     797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ngnzh_kubernetes-dashboard(ba3001f4-1a32-4808-bd59-6d66fe5a0867)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ngnzh" podUID="ba3001f4-1a32-4808-bd59-6d66fe5a0867"
	Jan 10 02:51:03 default-k8s-diff-port-403885 kubelet[797]: I0110 02:51:03.577016     797 scope.go:122] "RemoveContainer" containerID="f3a4dab3b349942b6b00dff9e9ac2e0626b5c7d8890cd8aa838981f79a2d1c23"
	Jan 10 02:51:05 default-k8s-diff-port-403885 kubelet[797]: E0110 02:51:05.432211     797 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-sck2c" containerName="coredns"
	Jan 10 02:51:16 default-k8s-diff-port-403885 kubelet[797]: E0110 02:51:16.359963     797 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ngnzh" containerName="dashboard-metrics-scraper"
	Jan 10 02:51:16 default-k8s-diff-port-403885 kubelet[797]: I0110 02:51:16.360506     797 scope.go:122] "RemoveContainer" containerID="a8bfb9653382e887ee3cf9c6de048fbe30fabcad1d3c5e5fc7a07a62a94a0f5c"
	Jan 10 02:51:16 default-k8s-diff-port-403885 kubelet[797]: I0110 02:51:16.616111     797 scope.go:122] "RemoveContainer" containerID="a8bfb9653382e887ee3cf9c6de048fbe30fabcad1d3c5e5fc7a07a62a94a0f5c"
	Jan 10 02:51:16 default-k8s-diff-port-403885 kubelet[797]: E0110 02:51:16.616689     797 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ngnzh" containerName="dashboard-metrics-scraper"
	Jan 10 02:51:16 default-k8s-diff-port-403885 kubelet[797]: I0110 02:51:16.616718     797 scope.go:122] "RemoveContainer" containerID="cea7cb71f60a3bf1cc0f92af289dd1a12432224c4eec12b146d182a413e0d79a"
	Jan 10 02:51:16 default-k8s-diff-port-403885 kubelet[797]: E0110 02:51:16.617212     797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ngnzh_kubernetes-dashboard(ba3001f4-1a32-4808-bd59-6d66fe5a0867)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ngnzh" podUID="ba3001f4-1a32-4808-bd59-6d66fe5a0867"
	Jan 10 02:51:20 default-k8s-diff-port-403885 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 02:51:20 default-k8s-diff-port-403885 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 02:51:20 default-k8s-diff-port-403885 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [c189fb7bad01fb39bc337c1bc6565e786ee9e056431d9067b94b07b97b2427cc] <==
	2026/01/10 02:50:49 Using namespace: kubernetes-dashboard
	2026/01/10 02:50:49 Using in-cluster config to connect to apiserver
	2026/01/10 02:50:49 Using secret token for csrf signing
	2026/01/10 02:50:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 02:50:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 02:50:49 Successful initial request to the apiserver, version: v1.35.0
	2026/01/10 02:50:49 Generating JWE encryption key
	2026/01/10 02:50:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 02:50:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 02:50:50 Initializing JWE encryption key from synchronized object
	2026/01/10 02:50:50 Creating in-cluster Sidecar client
	2026/01/10 02:50:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:50:50 Serving insecurely on HTTP port: 9090
	2026/01/10 02:51:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:50:49 Starting overwatch
	
	
	==> storage-provisioner [910d0dab6a77a8bbc264e0daefb03ba43c5628340ff9b5c13ebb7ce7186fb91b] <==
	I0110 02:51:03.632713       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 02:51:03.647339       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 02:51:03.647458       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 02:51:03.651108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:51:07.106527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:51:11.366929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:51:14.965034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:51:18.018688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:51:21.040715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:51:21.048408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:51:21.048615       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 02:51:21.048785       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-403885_ed02fb7d-452a-42e6-bd52-572a9babe278!
	I0110 02:51:21.049427       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa1af1bb-334c-4465-9421-7ffe1f5fe2f3", APIVersion:"v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-403885_ed02fb7d-452a-42e6-bd52-572a9babe278 became leader
	W0110 02:51:21.058015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:51:21.061379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:51:21.149220       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-403885_ed02fb7d-452a-42e6-bd52-572a9babe278!
	W0110 02:51:23.065042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:51:23.071965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f3a4dab3b349942b6b00dff9e9ac2e0626b5c7d8890cd8aa838981f79a2d1c23] <==
	I0110 02:50:32.924734       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 02:51:02.926688       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-403885 -n default-k8s-diff-port-403885
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-403885 -n default-k8s-diff-port-403885: exit status 2 (438.21761ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-403885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-403885
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-403885:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "68becb0d3e52aa8382b4740bd6ddcc35bdfee6a5501059560ca4b5b88dc1ba82",
	        "Created": "2026-01-10T02:49:07.169528975Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 232060,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T02:50:17.249244298Z",
	            "FinishedAt": "2026-01-10T02:50:15.588512013Z"
	        },
	        "Image": "sha256:61b418c457107ee7d9335f5e03d8e7ecced6bcc2627a71ae5411ca466c7b614b",
	        "ResolvConfPath": "/var/lib/docker/containers/68becb0d3e52aa8382b4740bd6ddcc35bdfee6a5501059560ca4b5b88dc1ba82/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68becb0d3e52aa8382b4740bd6ddcc35bdfee6a5501059560ca4b5b88dc1ba82/hostname",
	        "HostsPath": "/var/lib/docker/containers/68becb0d3e52aa8382b4740bd6ddcc35bdfee6a5501059560ca4b5b88dc1ba82/hosts",
	        "LogPath": "/var/lib/docker/containers/68becb0d3e52aa8382b4740bd6ddcc35bdfee6a5501059560ca4b5b88dc1ba82/68becb0d3e52aa8382b4740bd6ddcc35bdfee6a5501059560ca4b5b88dc1ba82-json.log",
	        "Name": "/default-k8s-diff-port-403885",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-403885:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-403885",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "68becb0d3e52aa8382b4740bd6ddcc35bdfee6a5501059560ca4b5b88dc1ba82",
	                "LowerDir": "/var/lib/docker/overlay2/d01d645ede148d5aa173ad4f705d7224af0ded80634c399615f32176e47ffff2-init/diff:/var/lib/docker/overlay2/2b235871be32ebd3e55c0a490bf5b289824c6be94b4d2763f5bca3b814af3fd1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d01d645ede148d5aa173ad4f705d7224af0ded80634c399615f32176e47ffff2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d01d645ede148d5aa173ad4f705d7224af0ded80634c399615f32176e47ffff2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d01d645ede148d5aa173ad4f705d7224af0ded80634c399615f32176e47ffff2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-403885",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-403885/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-403885",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-403885",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-403885",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dc182adaacf2631bf689d67ee793e1431bb8c1879d8cf590f0bce4e4ffd1712e",
	            "SandboxKey": "/var/run/docker/netns/dc182adaacf2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-403885": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:2f:ea:4b:05:77",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "634ff5f67b9d8e0ced560118940fddc59ecaca247334cc034944724496472f4d",
	                    "EndpointID": "72c1c2908f39f742654576e114fe2b624310b39150f18e6a071299b3a9fd1ee4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-403885",
	                        "68becb0d3e52"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-403885 -n default-k8s-diff-port-403885
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-403885 -n default-k8s-diff-port-403885: exit status 2 (332.714564ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-403885 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-403885 logs -n 25: (1.549872s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ delete  │ -p no-preload-676905                                                                                                                                                                                                                          │ no-preload-676905                 │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ delete  │ -p no-preload-676905                                                                                                                                                                                                                          │ no-preload-676905                 │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ start   │ -p newest-cni-733680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-733680                 │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ addons  │ enable metrics-server -p newest-cni-733680 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-733680                 │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │                     │
	│ stop    │ -p newest-cni-733680 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-733680                 │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ addons  │ enable dashboard -p newest-cni-733680 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-733680                 │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ start   │ -p newest-cni-733680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-733680                 │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ image   │ newest-cni-733680 image list --format=json                                                                                                                                                                                                    │ newest-cni-733680                 │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │ 10 Jan 26 02:49 UTC │
	│ pause   │ -p newest-cni-733680 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-733680                 │ jenkins │ v1.37.0 │ 10 Jan 26 02:49 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-403885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-403885      │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-403885 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-403885      │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │ 10 Jan 26 02:50 UTC │
	│ delete  │ -p newest-cni-733680                                                                                                                                                                                                                          │ newest-cni-733680                 │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │ 10 Jan 26 02:50 UTC │
	│ delete  │ -p newest-cni-733680                                                                                                                                                                                                                          │ newest-cni-733680                 │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │ 10 Jan 26 02:50 UTC │
	│ start   │ -p test-preload-dl-gcs-876828 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-876828        │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-876828                                                                                                                                                                                                                 │ test-preload-dl-gcs-876828        │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │ 10 Jan 26 02:50 UTC │
	│ start   │ -p test-preload-dl-github-831481 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-831481     │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-403885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-403885      │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │ 10 Jan 26 02:50 UTC │
	│ start   │ -p default-k8s-diff-port-403885 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-403885      │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │ 10 Jan 26 02:51 UTC │
	│ delete  │ -p test-preload-dl-github-831481                                                                                                                                                                                                              │ test-preload-dl-github-831481     │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │ 10 Jan 26 02:50 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-145587 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-145587 │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-145587                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-145587 │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │ 10 Jan 26 02:50 UTC │
	│ start   │ -p auto-989144 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-989144                       │ jenkins │ v1.37.0 │ 10 Jan 26 02:50 UTC │ 10 Jan 26 02:51 UTC │
	│ ssh     │ -p auto-989144 pgrep -a kubelet                                                                                                                                                                                                               │ auto-989144                       │ jenkins │ v1.37.0 │ 10 Jan 26 02:51 UTC │ 10 Jan 26 02:51 UTC │
	│ image   │ default-k8s-diff-port-403885 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-403885      │ jenkins │ v1.37.0 │ 10 Jan 26 02:51 UTC │ 10 Jan 26 02:51 UTC │
	│ pause   │ -p default-k8s-diff-port-403885 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-403885      │ jenkins │ v1.37.0 │ 10 Jan 26 02:51 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 02:50:23
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 02:50:23.372009  233009 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:50:23.372185  233009 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:50:23.372210  233009 out.go:374] Setting ErrFile to fd 2...
	I0110 02:50:23.372231  233009 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:50:23.373715  233009 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:50:23.374195  233009 out.go:368] Setting JSON to false
	I0110 02:50:23.374973  233009 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5573,"bootTime":1768007851,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 02:50:23.375039  233009 start.go:143] virtualization:  
	I0110 02:50:23.378020  233009 out.go:179] * [auto-989144] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:50:23.381958  233009 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:50:23.382035  233009 notify.go:221] Checking for updates...
	I0110 02:50:23.387763  233009 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:50:23.390637  233009 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:50:23.393437  233009 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 02:50:23.396125  233009 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:50:23.398973  233009 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:50:23.402357  233009 config.go:182] Loaded profile config "default-k8s-diff-port-403885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:50:23.402464  233009 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:50:23.436154  233009 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:50:23.436273  233009 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:50:23.540633  233009 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-10 02:50:23.531381029 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:50:23.540732  233009 docker.go:319] overlay module found
	I0110 02:50:23.543871  233009 out.go:179] * Using the docker driver based on user configuration
	I0110 02:50:23.546696  233009 start.go:309] selected driver: docker
	I0110 02:50:23.546710  233009 start.go:928] validating driver "docker" against <nil>
	I0110 02:50:23.546723  233009 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:50:23.547420  233009 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:50:23.650203  233009 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-10 02:50:23.639466846 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:50:23.650372  233009 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 02:50:23.650584  233009 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:50:23.653456  233009 out.go:179] * Using Docker driver with root privileges
	I0110 02:50:23.656327  233009 cni.go:84] Creating CNI manager for ""
	I0110 02:50:23.656384  233009 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:50:23.656395  233009 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 02:50:23.656465  233009 start.go:353] cluster config:
	{Name:auto-989144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-989144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:50:23.659502  233009 out.go:179] * Starting "auto-989144" primary control-plane node in "auto-989144" cluster
	I0110 02:50:23.662349  233009 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 02:50:23.665377  233009 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 02:50:23.668185  233009 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:50:23.668232  233009 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
	I0110 02:50:23.668241  233009 cache.go:65] Caching tarball of preloaded images
	I0110 02:50:23.668323  233009 preload.go:251] Found /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0110 02:50:23.668340  233009 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 02:50:23.668439  233009 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/config.json ...
	I0110 02:50:23.668457  233009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/config.json: {Name:mk0723e954b06aa6ef935da4b050e021f40f239a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:23.668598  233009 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 02:50:23.688834  233009 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 02:50:23.688851  233009 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 02:50:23.688865  233009 cache.go:243] Successfully downloaded all kic artifacts
	I0110 02:50:23.688896  233009 start.go:360] acquireMachinesLock for auto-989144: {Name:mk0c135b81631369a38f59e1c5f17ccfdae85af7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 02:50:23.688991  233009 start.go:364] duration metric: took 79.382µs to acquireMachinesLock for "auto-989144"
	I0110 02:50:23.689018  233009 start.go:93] Provisioning new machine with config: &{Name:auto-989144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-989144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:50:23.689076  233009 start.go:125] createHost starting for "" (driver="docker")
	I0110 02:50:22.302621  231933 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:50:22.302641  231933 machine.go:97] duration metric: took 4.672926946s to provisionDockerMachine
	I0110 02:50:22.302652  231933 start.go:293] postStartSetup for "default-k8s-diff-port-403885" (driver="docker")
	I0110 02:50:22.302663  231933 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:50:22.302722  231933 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:50:22.302818  231933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:50:22.342465  231933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/default-k8s-diff-port-403885/id_rsa Username:docker}
	I0110 02:50:22.458157  231933 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:50:22.462035  231933 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:50:22.462064  231933 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:50:22.462075  231933 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/addons for local assets ...
	I0110 02:50:22.462128  231933 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/files for local assets ...
	I0110 02:50:22.462211  231933 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> 41682.pem in /etc/ssl/certs
	I0110 02:50:22.462333  231933 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:50:22.470591  231933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:50:22.489701  231933 start.go:296] duration metric: took 187.03564ms for postStartSetup
	I0110 02:50:22.489777  231933 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:50:22.489818  231933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:50:22.516545  231933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/default-k8s-diff-port-403885/id_rsa Username:docker}
	I0110 02:50:22.627373  231933 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:50:22.633725  231933 fix.go:56] duration metric: took 5.447620162s for fixHost
	I0110 02:50:22.633758  231933 start.go:83] releasing machines lock for "default-k8s-diff-port-403885", held for 5.447667291s
	I0110 02:50:22.633823  231933 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-403885
	I0110 02:50:22.656691  231933 ssh_runner.go:195] Run: cat /version.json
	I0110 02:50:22.656742  231933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:50:22.656987  231933 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:50:22.657038  231933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:50:22.696944  231933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/default-k8s-diff-port-403885/id_rsa Username:docker}
	I0110 02:50:22.707568  231933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/default-k8s-diff-port-403885/id_rsa Username:docker}
	I0110 02:50:22.811430  231933 ssh_runner.go:195] Run: systemctl --version
	I0110 02:50:22.935076  231933 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:50:22.997268  231933 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:50:23.004470  231933 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:50:23.004550  231933 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:50:23.014132  231933 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 02:50:23.014153  231933 start.go:496] detecting cgroup driver to use...
	I0110 02:50:23.014184  231933 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 02:50:23.014247  231933 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:50:23.032426  231933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:50:23.051852  231933 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:50:23.051924  231933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:50:23.068669  231933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:50:23.082838  231933 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:50:23.219258  231933 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:50:23.369701  231933 docker.go:234] disabling docker service ...
	I0110 02:50:23.369786  231933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:50:23.388922  231933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:50:23.409066  231933 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:50:23.561096  231933 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:50:23.700318  231933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:50:23.714462  231933 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:50:23.736792  231933 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:50:23.736869  231933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:23.748771  231933 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 02:50:23.748840  231933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:23.759377  231933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:23.773558  231933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:23.793451  231933 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:50:23.804610  231933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:23.817418  231933 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:23.828824  231933 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:23.842570  231933 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:50:23.856006  231933 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:50:23.864115  231933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:50:23.999249  231933 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 02:50:24.357558  231933 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:50:24.357626  231933 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:50:24.362500  231933 start.go:574] Will wait 60s for crictl version
	I0110 02:50:24.362561  231933 ssh_runner.go:195] Run: which crictl
	I0110 02:50:24.366788  231933 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:50:24.409722  231933 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:50:24.409806  231933 ssh_runner.go:195] Run: crio --version
	I0110 02:50:24.450944  231933 ssh_runner.go:195] Run: crio --version
	I0110 02:50:24.504630  231933 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:50:24.507576  231933 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-403885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:50:24.534708  231933 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 02:50:24.538514  231933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:50:24.548455  231933 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-403885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-403885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:50:24.548578  231933 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:50:24.548636  231933 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:50:24.602456  231933 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:50:24.602477  231933 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:50:24.602532  231933 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:50:24.653825  231933 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:50:24.653857  231933 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:50:24.653866  231933 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.35.0 crio true true} ...
	I0110 02:50:24.653981  231933 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-403885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-403885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:50:24.654091  231933 ssh_runner.go:195] Run: crio config
	I0110 02:50:24.743472  231933 cni.go:84] Creating CNI manager for ""
	I0110 02:50:24.743496  231933 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:50:24.743524  231933 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:50:24.743548  231933 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-403885 NodeName:default-k8s-diff-port-403885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:50:24.743677  231933 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-403885"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:50:24.743757  231933 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:50:24.755908  231933 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:50:24.755980  231933 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:50:24.772062  231933 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0110 02:50:24.804113  231933 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:50:24.818583  231933 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2245 bytes)
	I0110 02:50:24.833525  231933 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:50:24.837865  231933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:50:24.849935  231933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:50:25.021584  231933 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:50:25.040414  231933 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885 for IP: 192.168.85.2
	I0110 02:50:25.040438  231933 certs.go:195] generating shared ca certs ...
	I0110 02:50:25.040456  231933 certs.go:227] acquiring lock for ca certs: {Name:mk60bb8578732e3e8efcf11c7d77fb48585828c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:25.040603  231933 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key
	I0110 02:50:25.040661  231933 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key
	I0110 02:50:25.040673  231933 certs.go:257] generating profile certs ...
	I0110 02:50:25.040763  231933 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/client.key
	I0110 02:50:25.040836  231933 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/apiserver.key.f53c6d08
	I0110 02:50:25.040883  231933 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/proxy-client.key
	I0110 02:50:25.041006  231933 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem (1338 bytes)
	W0110 02:50:25.041043  231933 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168_empty.pem, impossibly tiny 0 bytes
	I0110 02:50:25.041057  231933 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:50:25.041089  231933 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:50:25.041117  231933 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:50:25.041150  231933 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem (1679 bytes)
	I0110 02:50:25.041200  231933 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:50:25.041778  231933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:50:25.110027  231933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:50:25.153950  231933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:50:25.209707  231933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 02:50:25.279447  231933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0110 02:50:25.341608  231933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 02:50:25.375197  231933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:50:25.396332  231933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 02:50:25.415172  231933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I0110 02:50:25.442401  231933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I0110 02:50:25.475556  231933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:50:25.501102  231933 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:50:25.518992  231933 ssh_runner.go:195] Run: openssl version
	I0110 02:50:25.528842  231933 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I0110 02:50:25.537896  231933 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I0110 02:50:25.546652  231933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I0110 02:50:25.551051  231933 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:58 /usr/share/ca-certificates/41682.pem
	I0110 02:50:25.551171  231933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I0110 02:50:25.596004  231933 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:50:25.605363  231933 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:50:25.614097  231933 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:50:25.623265  231933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:50:25.631222  231933 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:50:25.631355  231933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:50:25.678253  231933 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:50:25.687422  231933 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I0110 02:50:25.696236  231933 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I0110 02:50:25.705170  231933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I0110 02:50:25.709814  231933 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:58 /usr/share/ca-certificates/4168.pem
	I0110 02:50:25.709933  231933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I0110 02:50:25.756522  231933 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
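The block above (02:50:25.5xx-25.75x) shows how each CA bundle is installed on the node: the PEM is copied under /usr/share/ca-certificates, symlinked into /etc/ssl/certs under its own name, hashed with `openssl x509 -hash -noout`, and then the `sudo test -L` probes confirm a `<hash>.0` link exists so OpenSSL's directory lookup can find it. A minimal Go sketch of that hash-and-symlink step, shelling out to openssl exactly as the log does (the paths are illustrative, not minikube's own code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash mirrors the hash-and-symlink step seen in the log: it asks
// openssl for the subject hash of a PEM certificate and creates
// /etc/ssl/certs/<hash>.0 pointing at it.
func linkCertByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// "ln -fs" semantics: replace an existing link if present.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	// Illustrative path; the log installs minikubeCA.pem, 4168.pem and 41682.pem this way.
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}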
	I0110 02:50:25.766725  231933 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:50:25.787086  231933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 02:50:25.888303  231933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 02:50:25.988096  231933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 02:50:26.082286  231933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 02:50:26.198308  231933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 02:50:26.377357  231933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
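The six `openssl x509 -checkend 86400` runs above verify that each existing control-plane certificate remains valid for at least 24 hours before the cluster restart reuses it. The same check can be expressed in pure Go with crypto/x509; this is an illustrative equivalent under that assumption, not the harness's own helper:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within the given window, matching `openssl x509 -checkend <seconds>`.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Same 86400-second window the log uses; the path is one of the certs it checks.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}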
	I0110 02:50:26.492703  231933 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-403885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-403885 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:50:26.492855  231933 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:50:26.492961  231933 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:50:26.586424  231933 cri.go:96] found id: "ccaef7514d5ac77efad7a1f91ebd93145b03e3462b1154eb3f723d4a112acdf6"
	I0110 02:50:26.586499  231933 cri.go:96] found id: "0a4524e2475eb21f6dbb36bb2158a508c31075979547ca9c30367ced5eab40f6"
	I0110 02:50:26.586533  231933 cri.go:96] found id: "73f1ff616118310e5ea35e9ab5496ff345a48d4dbb2a0c33eba742bc17a20097"
	I0110 02:50:26.586565  231933 cri.go:96] found id: "3eef2c483d9e9a9a8a3ab6bf3e8cf944fedbac4b9d208f8d1d0c7a3086f100ac"
	I0110 02:50:26.586584  231933 cri.go:96] found id: ""
	I0110 02:50:26.586668  231933 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 02:50:26.639418  231933 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T02:50:26Z" level=error msg="open /run/runc: no such file or directory"
	I0110 02:50:26.639575  231933 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:50:26.668217  231933 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 02:50:26.668297  231933 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 02:50:26.668394  231933 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 02:50:26.686454  231933 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 02:50:26.686959  231933 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-403885" does not appear in /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:50:26.687113  231933 kubeconfig.go:62] /home/jenkins/minikube-integration/22414-2353/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-403885" cluster setting kubeconfig missing "default-k8s-diff-port-403885" context setting]
	I0110 02:50:26.687476  231933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:26.689352  231933 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 02:50:26.705803  231933 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I0110 02:50:26.705885  231933 kubeadm.go:602] duration metric: took 37.568468ms to restartPrimaryControlPlane
	I0110 02:50:26.705918  231933 kubeadm.go:403] duration metric: took 213.223882ms to StartCluster
	I0110 02:50:26.705969  231933 settings.go:142] acquiring lock: {Name:mkf19ab3097ad600bdb33c5b47b20062ddaabac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:26.706070  231933 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:50:26.706763  231933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:26.707041  231933 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:50:26.707519  231933 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:50:26.707595  231933 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-403885"
	I0110 02:50:26.707608  231933 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-403885"
	W0110 02:50:26.707614  231933 addons.go:248] addon storage-provisioner should already be in state true
	I0110 02:50:26.707635  231933 host.go:66] Checking if "default-k8s-diff-port-403885" exists ...
	I0110 02:50:26.709713  231933 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-403885 --format={{.State.Status}}
	I0110 02:50:26.709978  231933 config.go:182] Loaded profile config "default-k8s-diff-port-403885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:50:26.710073  231933 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-403885"
	I0110 02:50:26.710104  231933 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-403885"
	W0110 02:50:26.710138  231933 addons.go:248] addon dashboard should already be in state true
	I0110 02:50:26.710188  231933 host.go:66] Checking if "default-k8s-diff-port-403885" exists ...
	I0110 02:50:26.710711  231933 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-403885 --format={{.State.Status}}
	I0110 02:50:26.711104  231933 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-403885"
	I0110 02:50:26.711132  231933 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-403885"
	I0110 02:50:26.711396  231933 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-403885 --format={{.State.Status}}
	I0110 02:50:26.715584  231933 out.go:179] * Verifying Kubernetes components...
	I0110 02:50:26.723363  231933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:50:26.783964  231933 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:50:26.787608  231933 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:50:26.787632  231933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:50:26.787709  231933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:50:26.790296  231933 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-403885"
	W0110 02:50:26.790320  231933 addons.go:248] addon default-storageclass should already be in state true
	I0110 02:50:26.790345  231933 host.go:66] Checking if "default-k8s-diff-port-403885" exists ...
	I0110 02:50:26.790940  231933 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-403885 --format={{.State.Status}}
	I0110 02:50:26.791902  231933 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 02:50:26.799937  231933 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 02:50:26.819906  231933 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 02:50:26.819934  231933 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 02:50:26.820017  231933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:50:26.825662  231933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/default-k8s-diff-port-403885/id_rsa Username:docker}
	I0110 02:50:26.840449  231933 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:50:26.840471  231933 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:50:26.840540  231933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-403885
	I0110 02:50:26.871822  231933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/default-k8s-diff-port-403885/id_rsa Username:docker}
	I0110 02:50:26.892930  231933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/default-k8s-diff-port-403885/id_rsa Username:docker}
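At this point the storage-provisioner, storageclass and dashboard manifests have been staged under /etc/kubernetes/addons on the node, and a few entries further down the log applies them with the node's own kubectl against /var/lib/minikube/kubeconfig. A hedged sketch of that apply step as it would look when run on the node (binary and kubeconfig paths are taken from the log; the helper itself is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddon runs the same style of command the log issues over SSH:
// KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply -f <manifest>.
func applyAddon(manifest string) error {
	cmd := exec.Command("/var/lib/minikube/binaries/v1.35.0/kubectl", "apply", "-f", manifest)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
}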
	I0110 02:50:23.692443  233009 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 02:50:23.692665  233009 start.go:159] libmachine.API.Create for "auto-989144" (driver="docker")
	I0110 02:50:23.692691  233009 client.go:173] LocalClient.Create starting
	I0110 02:50:23.692759  233009 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem
	I0110 02:50:23.692792  233009 main.go:144] libmachine: Decoding PEM data...
	I0110 02:50:23.692806  233009 main.go:144] libmachine: Parsing certificate...
	I0110 02:50:23.692858  233009 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem
	I0110 02:50:23.692874  233009 main.go:144] libmachine: Decoding PEM data...
	I0110 02:50:23.692885  233009 main.go:144] libmachine: Parsing certificate...
	I0110 02:50:23.693267  233009 cli_runner.go:164] Run: docker network inspect auto-989144 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 02:50:23.712449  233009 cli_runner.go:211] docker network inspect auto-989144 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 02:50:23.712518  233009 network_create.go:284] running [docker network inspect auto-989144] to gather additional debugging logs...
	I0110 02:50:23.712546  233009 cli_runner.go:164] Run: docker network inspect auto-989144
	W0110 02:50:23.730771  233009 cli_runner.go:211] docker network inspect auto-989144 returned with exit code 1
	I0110 02:50:23.730819  233009 network_create.go:287] error running [docker network inspect auto-989144]: docker network inspect auto-989144: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-989144 not found
	I0110 02:50:23.730832  233009 network_create.go:289] output of [docker network inspect auto-989144]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-989144 not found
	
	** /stderr **
	I0110 02:50:23.730923  233009 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:50:23.747591  233009 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dba69832168e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:28:cf:b8:5c:6c} reservation:<nil>}
	I0110 02:50:23.747982  233009 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1bcca9209747 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:d2:8b:83:a4:43:ae} reservation:<nil>}
	I0110 02:50:23.748295  233009 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-383ddfacc8f8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:26:ff:fb:97:66:af} reservation:<nil>}
	I0110 02:50:23.749561  233009 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2abe0}
	I0110 02:50:23.749587  233009 network_create.go:124] attempt to create docker network auto-989144 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0110 02:50:23.749643  233009 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-989144 auto-989144
	I0110 02:50:23.838431  233009 network_create.go:108] docker network auto-989144 192.168.76.0/24 created
	I0110 02:50:23.838460  233009 kic.go:121] calculated static IP "192.168.76.2" for the "auto-989144" container
	I0110 02:50:23.838535  233009 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 02:50:23.855728  233009 cli_runner.go:164] Run: docker volume create auto-989144 --label name.minikube.sigs.k8s.io=auto-989144 --label created_by.minikube.sigs.k8s.io=true
	I0110 02:50:23.881245  233009 oci.go:103] Successfully created a docker volume auto-989144
	I0110 02:50:23.881332  233009 cli_runner.go:164] Run: docker run --rm --name auto-989144-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-989144 --entrypoint /usr/bin/test -v auto-989144:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 02:50:24.714953  233009 oci.go:107] Successfully prepared a docker volume auto-989144
	I0110 02:50:24.715023  233009 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:50:24.715042  233009 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 02:50:24.715118  233009 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-989144:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
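Earlier in this block (02:50:23.74x) the network_create lines show the subnet search for the new "auto-989144" network: 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 are already claimed by existing docker bridges, so the next candidate, 192.168.76.0/24, is chosen. A simplified sketch of that scan, stepping the third octet by 9 as the addresses in this run suggest (the step size and starting point are inferred from the log, not confirmed from minikube's source):

package main

import "fmt"

// firstFreeSubnet walks candidate 192.168.X.0/24 networks in steps of 9
// (49, 58, 67, 76, ...) and returns the first one not already taken.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 255; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[subnet] {
			return subnet
		}
	}
	return ""
}

func main() {
	// Subnets the log reports as taken by existing docker bridges.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.76.0/24, matching the log
}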
	I0110 02:50:27.194825  231933 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 02:50:27.194852  231933 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 02:50:27.256561  231933 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:50:27.277405  231933 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 02:50:27.277429  231933 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 02:50:27.292312  231933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:50:27.331152  231933 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-403885" to be "Ready" ...
	I0110 02:50:27.337064  231933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:50:27.349338  231933 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 02:50:27.349364  231933 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 02:50:27.451462  231933 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 02:50:27.451485  231933 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 02:50:27.565540  231933 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 02:50:27.565620  231933 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 02:50:27.670699  231933 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 02:50:27.670778  231933 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 02:50:27.760468  231933 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 02:50:27.760542  231933 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 02:50:27.841848  231933 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 02:50:27.841928  231933 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 02:50:27.909237  231933 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:50:27.909262  231933 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 02:50:27.941802  231933 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 02:50:31.225373  231933 node_ready.go:49] node "default-k8s-diff-port-403885" is "Ready"
	I0110 02:50:31.225405  231933 node_ready.go:38] duration metric: took 3.89421938s for node "default-k8s-diff-port-403885" to be "Ready" ...
	I0110 02:50:31.225422  231933 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:50:31.225486  231933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:50:32.671326  231933 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.378977361s)
	I0110 02:50:32.671381  231933 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.33429348s)
	I0110 02:50:32.671628  231933 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.729781586s)
	I0110 02:50:32.671882  231933 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.446379139s)
	I0110 02:50:32.671935  231933 api_server.go:72] duration metric: took 5.964813079s to wait for apiserver process to appear ...
	I0110 02:50:32.671955  231933 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:50:32.672001  231933 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0110 02:50:32.675491  231933 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-403885 addons enable metrics-server
	
	I0110 02:50:32.705472  231933 api_server.go:325] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 02:50:32.705497  231933 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 02:50:32.736801  231933 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0110 02:50:29.658710  233009 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-989144:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (4.943552073s)
	I0110 02:50:29.658740  233009 kic.go:203] duration metric: took 4.943695348s to extract preloaded images to volume ...
	W0110 02:50:29.658888  233009 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0110 02:50:29.659019  233009 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 02:50:29.746517  233009 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-989144 --name auto-989144 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-989144 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-989144 --network auto-989144 --ip 192.168.76.2 --volume auto-989144:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
	I0110 02:50:30.114885  233009 cli_runner.go:164] Run: docker container inspect auto-989144 --format={{.State.Running}}
	I0110 02:50:30.139536  233009 cli_runner.go:164] Run: docker container inspect auto-989144 --format={{.State.Status}}
	I0110 02:50:30.162968  233009 cli_runner.go:164] Run: docker exec auto-989144 stat /var/lib/dpkg/alternatives/iptables
	I0110 02:50:30.235947  233009 oci.go:144] the created container "auto-989144" has a running status.
	I0110 02:50:30.235974  233009 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/auto-989144/id_rsa...
	I0110 02:50:30.338462  233009 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22414-2353/.minikube/machines/auto-989144/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 02:50:30.377563  233009 cli_runner.go:164] Run: docker container inspect auto-989144 --format={{.State.Status}}
	I0110 02:50:30.399396  233009 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 02:50:30.399415  233009 kic_runner.go:114] Args: [docker exec --privileged auto-989144 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 02:50:30.452116  233009 cli_runner.go:164] Run: docker container inspect auto-989144 --format={{.State.Status}}
	I0110 02:50:30.478883  233009 machine.go:94] provisionDockerMachine start ...
	I0110 02:50:30.478964  233009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989144
	I0110 02:50:30.515595  233009 main.go:144] libmachine: Using SSH client type: native
	I0110 02:50:30.516004  233009 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0110 02:50:30.516015  233009 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 02:50:30.516583  233009 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46636->127.0.0.1:33098: read: connection reset by peer
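The provisioning step above opens a native SSH session to the freshly started container on 127.0.0.1:33098 and immediately hits "connection reset by peer" because sshd inside the container is not accepting connections yet; the dial is retried and the hostname command succeeds a few seconds later in the log. A small retry sketch with golang.org/x/crypto/ssh under those assumptions (key path, user and port are the ones from this run; the retry loop is illustrative, not libmachine's):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps re-dialing until sshd inside the new container
// starts accepting connections, or the deadline passes.
func dialWithRetry(addr, user, keyPath string, timeout time.Duration) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the host key is minted per cluster
		Timeout:         5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("ssh to %s never came up: %w", addr, err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	c, err := dialWithRetry("127.0.0.1:33098", "docker",
		"/home/jenkins/minikube-integration/22414-2353/.minikube/machines/auto-989144/id_rsa", time.Minute)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer c.Close()
}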
	I0110 02:50:32.739710  231933 addons.go:530] duration metric: took 6.032188449s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0110 02:50:33.172980  231933 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0110 02:50:33.181708  231933 api_server.go:325] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0110 02:50:33.185361  231933 api_server.go:141] control plane version: v1.35.0
	I0110 02:50:33.185387  231933 api_server.go:131] duration metric: took 513.412882ms to wait for apiserver health ...
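The healthz exchange above follows the usual restart pattern: the first probe at 02:50:32 returns 500 because the rbac/bootstrap-roles post-start hook has not completed, and the retry at 02:50:33 returns 200, after which the control-plane version is read. A minimal sketch of polling that endpoint until it reports healthy (address and port are from the log; TLS verification is skipped only to keep the sketch short):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Illustrative only: minikube trusts its own cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}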
	I0110 02:50:33.185397  231933 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:50:33.192607  231933 system_pods.go:59] 8 kube-system pods found
	I0110 02:50:33.192645  231933 system_pods.go:61] "coredns-7d764666f9-sck2c" [8791efbb-04ac-4811-983a-9ffaf7bb15be] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:50:33.192655  231933 system_pods.go:61] "etcd-default-k8s-diff-port-403885" [bdc3bfff-d5db-47c9-8f0f-81256caffbcb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:50:33.192664  231933 system_pods.go:61] "kindnet-4h8vm" [f5820215-db87-4ed2-99a4-3970efeca785] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 02:50:33.192671  231933 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-403885" [708252d4-cdfc-4c44-bf86-993f95e9cbb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:50:33.192681  231933 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-403885" [a6e8ce72-0215-4cdc-b8a5-30863aba7c54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:50:33.192688  231933 system_pods.go:61] "kube-proxy-ss9fs" [61ec02c4-f966-46de-bd2f-81de41f8f1bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 02:50:33.192695  231933 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-403885" [fde84223-af60-46ed-b21b-ccd83712651a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:50:33.192704  231933 system_pods.go:61] "storage-provisioner" [83270b0f-e37c-4f07-a380-3ffb6386d492] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:50:33.192716  231933 system_pods.go:74] duration metric: took 7.314178ms to wait for pod list to return data ...
	I0110 02:50:33.192730  231933 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:50:33.198163  231933 default_sa.go:45] found service account: "default"
	I0110 02:50:33.198191  231933 default_sa.go:55] duration metric: took 5.455011ms for default service account to be created ...
	I0110 02:50:33.198202  231933 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:50:33.202890  231933 system_pods.go:86] 8 kube-system pods found
	I0110 02:50:33.202925  231933 system_pods.go:89] "coredns-7d764666f9-sck2c" [8791efbb-04ac-4811-983a-9ffaf7bb15be] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:50:33.202936  231933 system_pods.go:89] "etcd-default-k8s-diff-port-403885" [bdc3bfff-d5db-47c9-8f0f-81256caffbcb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 02:50:33.202945  231933 system_pods.go:89] "kindnet-4h8vm" [f5820215-db87-4ed2-99a4-3970efeca785] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 02:50:33.202955  231933 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-403885" [708252d4-cdfc-4c44-bf86-993f95e9cbb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 02:50:33.202963  231933 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-403885" [a6e8ce72-0215-4cdc-b8a5-30863aba7c54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:50:33.202974  231933 system_pods.go:89] "kube-proxy-ss9fs" [61ec02c4-f966-46de-bd2f-81de41f8f1bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 02:50:33.202983  231933 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-403885" [fde84223-af60-46ed-b21b-ccd83712651a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 02:50:33.202992  231933 system_pods.go:89] "storage-provisioner" [83270b0f-e37c-4f07-a380-3ffb6386d492] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:50:33.202999  231933 system_pods.go:126] duration metric: took 4.792004ms to wait for k8s-apps to be running ...
	I0110 02:50:33.203012  231933 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:50:33.203070  231933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:50:33.221022  231933 system_svc.go:56] duration metric: took 18.000698ms WaitForService to wait for kubelet
	I0110 02:50:33.221053  231933 kubeadm.go:587] duration metric: took 6.513939316s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:50:33.221073  231933 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:50:33.226202  231933 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 02:50:33.226229  231933 node_conditions.go:123] node cpu capacity is 2
	I0110 02:50:33.226251  231933 node_conditions.go:105] duration metric: took 5.172852ms to run NodePressure ...
	I0110 02:50:33.226265  231933 start.go:242] waiting for startup goroutines ...
	I0110 02:50:33.226276  231933 start.go:247] waiting for cluster config update ...
	I0110 02:50:33.226294  231933 start.go:256] writing updated cluster config ...
	I0110 02:50:33.226559  231933 ssh_runner.go:195] Run: rm -f paused
	I0110 02:50:33.230214  231933 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:50:33.235040  231933 pod_ready.go:83] waiting for pod "coredns-7d764666f9-sck2c" in "kube-system" namespace to be "Ready" or be gone ...
	W0110 02:50:35.263767  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
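The pod_ready lines show the post-start verification loop: each kube-system pod is polled until its Ready condition turns True or the pod disappears, and coredns-7d764666f9-sck2c is still not Ready at 02:50:35. A sketch of that readiness check with client-go under stated assumptions (the kubeconfig path, namespace and pod name come from this run; this is not the test harness's own helper):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady fetches a pod and reports whether its Ready condition is True.
func podIsReady(clientset *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22414-2353/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)
	ready, err := podIsReady(clientset, "kube-system", "coredns-7d764666f9-sck2c")
	fmt.Println("ready:", ready, "err:", err)
}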
	I0110 02:50:33.669237  233009 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-989144
	
	I0110 02:50:33.669262  233009 ubuntu.go:182] provisioning hostname "auto-989144"
	I0110 02:50:33.669357  233009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989144
	I0110 02:50:33.687518  233009 main.go:144] libmachine: Using SSH client type: native
	I0110 02:50:33.688030  233009 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0110 02:50:33.688046  233009 main.go:144] libmachine: About to run SSH command:
	sudo hostname auto-989144 && echo "auto-989144" | sudo tee /etc/hostname
	I0110 02:50:33.853127  233009 main.go:144] libmachine: SSH cmd err, output: <nil>: auto-989144
	
	I0110 02:50:33.853225  233009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989144
	I0110 02:50:33.872900  233009 main.go:144] libmachine: Using SSH client type: native
	I0110 02:50:33.873221  233009 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0110 02:50:33.873243  233009 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-989144' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-989144/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-989144' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 02:50:34.019976  233009 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 02:50:34.019999  233009 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22414-2353/.minikube CaCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22414-2353/.minikube}
	I0110 02:50:34.020032  233009 ubuntu.go:190] setting up certificates
	I0110 02:50:34.020042  233009 provision.go:84] configureAuth start
	I0110 02:50:34.020098  233009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-989144
	I0110 02:50:34.039297  233009 provision.go:143] copyHostCerts
	I0110 02:50:34.039362  233009 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem, removing ...
	I0110 02:50:34.039371  233009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem
	I0110 02:50:34.039465  233009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/cert.pem (1123 bytes)
	I0110 02:50:34.039562  233009 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem, removing ...
	I0110 02:50:34.039568  233009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem
	I0110 02:50:34.039593  233009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/key.pem (1679 bytes)
	I0110 02:50:34.039643  233009 exec_runner.go:144] found /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem, removing ...
	I0110 02:50:34.039647  233009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem
	I0110 02:50:34.039670  233009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22414-2353/.minikube/ca.pem (1082 bytes)
	I0110 02:50:34.039717  233009 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem org=jenkins.auto-989144 san=[127.0.0.1 192.168.76.2 auto-989144 localhost minikube]
	I0110 02:50:34.373512  233009 provision.go:177] copyRemoteCerts
	I0110 02:50:34.373631  233009 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 02:50:34.373703  233009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989144
	I0110 02:50:34.391258  233009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/auto-989144/id_rsa Username:docker}
	I0110 02:50:34.500147  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0110 02:50:34.528285  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0110 02:50:34.548743  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 02:50:34.569326  233009 provision.go:87] duration metric: took 549.271375ms to configureAuth
	I0110 02:50:34.569358  233009 ubuntu.go:206] setting minikube options for container-runtime
	I0110 02:50:34.569601  233009 config.go:182] Loaded profile config "auto-989144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:50:34.569750  233009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989144
	I0110 02:50:34.587522  233009 main.go:144] libmachine: Using SSH client type: native
	I0110 02:50:34.587861  233009 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x47b2e0] 0x47d7f0 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0110 02:50:34.587882  233009 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 02:50:34.932347  233009 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 02:50:34.932376  233009 machine.go:97] duration metric: took 4.453474458s to provisionDockerMachine
	I0110 02:50:34.932388  233009 client.go:176] duration metric: took 11.239690885s to LocalClient.Create
	I0110 02:50:34.932403  233009 start.go:167] duration metric: took 11.239738342s to libmachine.API.Create "auto-989144"
	I0110 02:50:34.932410  233009 start.go:293] postStartSetup for "auto-989144" (driver="docker")
	I0110 02:50:34.932420  233009 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 02:50:34.932509  233009 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 02:50:34.932598  233009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989144
	I0110 02:50:34.966618  233009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/auto-989144/id_rsa Username:docker}
	I0110 02:50:35.086372  233009 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 02:50:35.090898  233009 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 02:50:35.090939  233009 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 02:50:35.090950  233009 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/addons for local assets ...
	I0110 02:50:35.091006  233009 filesync.go:126] Scanning /home/jenkins/minikube-integration/22414-2353/.minikube/files for local assets ...
	I0110 02:50:35.091091  233009 filesync.go:149] local asset: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem -> 41682.pem in /etc/ssl/certs
	I0110 02:50:35.091193  233009 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 02:50:35.102664  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:50:35.123919  233009 start.go:296] duration metric: took 191.484423ms for postStartSetup
	I0110 02:50:35.124335  233009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-989144
	I0110 02:50:35.148100  233009 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/config.json ...
	I0110 02:50:35.148377  233009 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:50:35.148427  233009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989144
	I0110 02:50:35.176519  233009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/auto-989144/id_rsa Username:docker}
	I0110 02:50:35.285543  233009 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 02:50:35.294082  233009 start.go:128] duration metric: took 11.604993259s to createHost
	I0110 02:50:35.294122  233009 start.go:83] releasing machines lock for "auto-989144", held for 11.605112369s
	I0110 02:50:35.294204  233009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-989144
	I0110 02:50:35.313631  233009 ssh_runner.go:195] Run: cat /version.json
	I0110 02:50:35.313798  233009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989144
	I0110 02:50:35.313739  233009 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 02:50:35.314095  233009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989144
	I0110 02:50:35.338547  233009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/auto-989144/id_rsa Username:docker}
	I0110 02:50:35.361257  233009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/auto-989144/id_rsa Username:docker}
	I0110 02:50:35.572228  233009 ssh_runner.go:195] Run: systemctl --version
	I0110 02:50:35.578678  233009 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 02:50:35.619328  233009 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 02:50:35.623756  233009 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 02:50:35.623912  233009 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 02:50:35.653664  233009 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I0110 02:50:35.653691  233009 start.go:496] detecting cgroup driver to use...
	I0110 02:50:35.653750  233009 detect.go:175] detected "cgroupfs" cgroup driver on host os
	I0110 02:50:35.653832  233009 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 02:50:35.671296  233009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 02:50:35.684072  233009 docker.go:218] disabling cri-docker service (if available) ...
	I0110 02:50:35.684151  233009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 02:50:35.703096  233009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 02:50:35.722524  233009 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 02:50:35.879877  233009 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 02:50:36.015083  233009 docker.go:234] disabling docker service ...
	I0110 02:50:36.015151  233009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 02:50:36.041658  233009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 02:50:36.055624  233009 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 02:50:36.182491  233009 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 02:50:36.317948  233009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 02:50:36.335474  233009 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 02:50:36.366936  233009 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 02:50:36.367050  233009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:36.376961  233009 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0110 02:50:36.377080  233009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:36.389720  233009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:36.410086  233009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:36.428539  233009 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 02:50:36.439095  233009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:36.449651  233009 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:36.472483  233009 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 02:50:36.483483  233009 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 02:50:36.494953  233009 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 02:50:36.503129  233009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:50:36.671859  233009 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 02:50:36.930191  233009 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 02:50:36.930345  233009 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 02:50:36.935502  233009 start.go:574] Will wait 60s for crictl version
	I0110 02:50:36.935609  233009 ssh_runner.go:195] Run: which crictl
	I0110 02:50:36.940917  233009 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 02:50:36.977719  233009 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 02:50:36.977882  233009 ssh_runner.go:195] Run: crio --version
	I0110 02:50:37.020953  233009 ssh_runner.go:195] Run: crio --version
	I0110 02:50:37.063895  233009 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 02:50:37.066886  233009 cli_runner.go:164] Run: docker network inspect auto-989144 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 02:50:37.094081  233009 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 02:50:37.103414  233009 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:50:37.117401  233009 kubeadm.go:884] updating cluster {Name:auto-989144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-989144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 02:50:37.117518  233009 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 02:50:37.117570  233009 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:50:37.172336  233009 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:50:37.172357  233009 crio.go:433] Images already preloaded, skipping extraction
	I0110 02:50:37.172421  233009 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 02:50:37.213419  233009 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 02:50:37.213440  233009 cache_images.go:86] Images are preloaded, skipping loading
	I0110 02:50:37.213448  233009 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 02:50:37.213536  233009 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-989144 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:auto-989144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 02:50:37.213610  233009 ssh_runner.go:195] Run: crio config
	I0110 02:50:37.293430  233009 cni.go:84] Creating CNI manager for ""
	I0110 02:50:37.293499  233009 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:50:37.293533  233009 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 02:50:37.293586  233009 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-989144 NodeName:auto-989144 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/
manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock failCgroupV1:false hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 02:50:37.293744  233009 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-989144"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	failCgroupV1: false
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 02:50:37.293853  233009 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 02:50:37.303978  233009 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 02:50:37.304093  233009 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 02:50:37.311585  233009 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0110 02:50:37.325816  233009 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 02:50:37.339175  233009 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I0110 02:50:37.352531  233009 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 02:50:37.356597  233009 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 02:50:37.367097  233009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:50:37.541126  233009 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:50:37.559156  233009 certs.go:69] Setting up /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144 for IP: 192.168.76.2
	I0110 02:50:37.559225  233009 certs.go:195] generating shared ca certs ...
	I0110 02:50:37.559256  233009 certs.go:227] acquiring lock for ca certs: {Name:mk60bb8578732e3e8efcf11c7d77fb48585828c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:37.559443  233009 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key
	I0110 02:50:37.559520  233009 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key
	I0110 02:50:37.559555  233009 certs.go:257] generating profile certs ...
	I0110 02:50:37.559635  233009 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.key
	I0110 02:50:37.559673  233009 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.crt with IP's: []
	I0110 02:50:37.700016  233009 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.crt ...
	I0110 02:50:37.700088  233009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.crt: {Name:mk0a9f6799306a45f75bc2d4088c8485af031457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:37.700535  233009 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.key ...
	I0110 02:50:37.700572  233009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.key: {Name:mkf40f0c89aa73969267378f93f8c575543af9ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:37.700731  233009 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/apiserver.key.f5155a44
	I0110 02:50:37.700770  233009 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/apiserver.crt.f5155a44 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0110 02:50:37.850411  233009 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/apiserver.crt.f5155a44 ...
	I0110 02:50:37.850501  233009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/apiserver.crt.f5155a44: {Name:mkbd845ee331e9f8d1247393de0522d1df7142cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:37.850696  233009 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/apiserver.key.f5155a44 ...
	I0110 02:50:37.850735  233009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/apiserver.key.f5155a44: {Name:mk82f002d186601567c317dd3ff1c5384ef7f9c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:37.850872  233009 certs.go:382] copying /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/apiserver.crt.f5155a44 -> /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/apiserver.crt
	I0110 02:50:37.851006  233009 certs.go:386] copying /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/apiserver.key.f5155a44 -> /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/apiserver.key
	I0110 02:50:37.851095  233009 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/proxy-client.key
	I0110 02:50:37.851145  233009 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/proxy-client.crt with IP's: []
	I0110 02:50:38.217239  233009 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/proxy-client.crt ...
	I0110 02:50:38.217350  233009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/proxy-client.crt: {Name:mk8ea00254b59891f4ff96a5b6d421200881489d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:38.217555  233009 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/proxy-client.key ...
	I0110 02:50:38.217588  233009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/proxy-client.key: {Name:mke62927631ff9bf11265f49d98f7f1566e865a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:38.217820  233009 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem (1338 bytes)
	W0110 02:50:38.217884  233009 certs.go:480] ignoring /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168_empty.pem, impossibly tiny 0 bytes
	I0110 02:50:38.217908  233009 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca-key.pem (1679 bytes)
	I0110 02:50:38.217967  233009 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/ca.pem (1082 bytes)
	I0110 02:50:38.218022  233009 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/cert.pem (1123 bytes)
	I0110 02:50:38.218082  233009 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/certs/key.pem (1679 bytes)
	I0110 02:50:38.218153  233009 certs.go:484] found cert: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem (1708 bytes)
	I0110 02:50:38.218762  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 02:50:38.240774  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0110 02:50:38.257427  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 02:50:38.275230  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 02:50:38.292459  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0110 02:50:38.310971  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 02:50:38.328327  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 02:50:38.344613  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 02:50:38.361182  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 02:50:38.379046  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/certs/4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I0110 02:50:38.396303  233009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/ssl/certs/41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I0110 02:50:38.413024  233009 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 02:50:38.437826  233009 ssh_runner.go:195] Run: openssl version
	I0110 02:50:38.448439  233009 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I0110 02:50:38.464397  233009 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I0110 02:50:38.478014  233009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I0110 02:50:38.482644  233009 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 01:58 /usr/share/ca-certificates/41682.pem
	I0110 02:50:38.482783  233009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I0110 02:50:38.550460  233009 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 02:50:38.559244  233009 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41682.pem /etc/ssl/certs/3ec20f2e.0
	I0110 02:50:38.567372  233009 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:50:38.575096  233009 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 02:50:38.583096  233009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:50:38.587376  233009 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 01:54 /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:50:38.587512  233009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 02:50:38.632036  233009 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 02:50:38.640470  233009 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 02:50:38.648296  233009 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I0110 02:50:38.655999  233009 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I0110 02:50:38.663977  233009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I0110 02:50:38.668099  233009 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 01:58 /usr/share/ca-certificates/4168.pem
	I0110 02:50:38.668211  233009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I0110 02:50:38.711597  233009 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 02:50:38.720059  233009 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4168.pem /etc/ssl/certs/51391683.0
	I0110 02:50:38.728362  233009 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 02:50:38.732733  233009 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 02:50:38.732832  233009 kubeadm.go:401] StartCluster: {Name:auto-989144 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:auto-989144 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:50:38.732946  233009 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 02:50:38.733038  233009 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 02:50:38.763561  233009 cri.go:96] found id: ""
	I0110 02:50:38.763669  233009 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 02:50:38.798195  233009 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 02:50:38.809350  233009 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 02:50:38.809409  233009 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 02:50:38.824285  233009 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 02:50:38.824302  233009 kubeadm.go:158] found existing configuration files:
	
	I0110 02:50:38.824353  233009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 02:50:38.836812  233009 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 02:50:38.836962  233009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 02:50:38.846963  233009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 02:50:38.858702  233009 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 02:50:38.858806  233009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 02:50:38.868944  233009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 02:50:38.878794  233009 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 02:50:38.878948  233009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 02:50:38.887137  233009 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 02:50:38.896216  233009 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 02:50:38.896368  233009 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 02:50:38.905714  233009 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 02:50:38.953342  233009 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 02:50:38.953983  233009 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 02:50:39.048375  233009 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 02:50:39.048493  233009 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I0110 02:50:39.048592  233009 kubeadm.go:319] OS: Linux
	I0110 02:50:39.048693  233009 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 02:50:39.048761  233009 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I0110 02:50:39.048829  233009 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 02:50:39.048908  233009 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 02:50:39.048980  233009 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 02:50:39.049073  233009 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 02:50:39.049154  233009 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 02:50:39.049210  233009 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 02:50:39.049263  233009 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I0110 02:50:39.127427  233009 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 02:50:39.127611  233009 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 02:50:39.127743  233009 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 02:50:39.141871  233009 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W0110 02:50:37.742994  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	W0110 02:50:40.241180  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	I0110 02:50:39.151113  233009 out.go:252]   - Generating certificates and keys ...
	I0110 02:50:39.151282  233009 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 02:50:39.151397  233009 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 02:50:39.237547  233009 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 02:50:39.719322  233009 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 02:50:40.210638  233009 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 02:50:40.309467  233009 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 02:50:40.573329  233009 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 02:50:40.573888  233009 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-989144 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 02:50:41.113968  233009 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 02:50:41.114540  233009 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-989144 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0110 02:50:41.549824  233009 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 02:50:41.994139  233009 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 02:50:42.257565  233009 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 02:50:42.258218  233009 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 02:50:42.404806  233009 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 02:50:42.665855  233009 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 02:50:43.316537  233009 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 02:50:44.154938  233009 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 02:50:44.309092  233009 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 02:50:44.309191  233009 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 02:50:44.313069  233009 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W0110 02:50:42.246121  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	W0110 02:50:44.740965  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	W0110 02:50:46.741115  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	I0110 02:50:44.316448  233009 out.go:252]   - Booting up control plane ...
	I0110 02:50:44.316549  233009 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 02:50:44.316626  233009 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 02:50:44.316694  233009 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 02:50:44.341787  233009 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 02:50:44.341920  233009 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 02:50:44.355565  233009 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 02:50:44.355670  233009 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 02:50:44.355716  233009 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 02:50:44.509113  233009 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 02:50:44.509290  233009 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 02:50:45.512582  233009 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.003422607s
	I0110 02:50:45.521304  233009 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0110 02:50:45.522867  233009 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0110 02:50:45.523483  233009 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0110 02:50:45.524144  233009 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W0110 02:50:49.242356  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	W0110 02:50:51.741078  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	I0110 02:50:49.048301  233009 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.52403287s
	I0110 02:50:50.691148  233009 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.166941147s
	I0110 02:50:52.525451  233009 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.001733004s
	I0110 02:50:52.576585  233009 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0110 02:50:52.600245  233009 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0110 02:50:52.616523  233009 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0110 02:50:52.617526  233009 kubeadm.go:319] [mark-control-plane] Marking the node auto-989144 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0110 02:50:52.639765  233009 kubeadm.go:319] [bootstrap-token] Using token: qnk1al.g7fjbz0nbykrrgx8
	I0110 02:50:52.642792  233009 out.go:252]   - Configuring RBAC rules ...
	I0110 02:50:52.642923  233009 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0110 02:50:52.651900  233009 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0110 02:50:52.662839  233009 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0110 02:50:52.669844  233009 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0110 02:50:52.683695  233009 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0110 02:50:52.688867  233009 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0110 02:50:52.942521  233009 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0110 02:50:53.410352  233009 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0110 02:50:53.942334  233009 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0110 02:50:53.944763  233009 kubeadm.go:319] 
	I0110 02:50:53.944842  233009 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0110 02:50:53.944847  233009 kubeadm.go:319] 
	I0110 02:50:53.944936  233009 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0110 02:50:53.944940  233009 kubeadm.go:319] 
	I0110 02:50:53.944965  233009 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0110 02:50:53.945024  233009 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0110 02:50:53.945075  233009 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0110 02:50:53.945079  233009 kubeadm.go:319] 
	I0110 02:50:53.945133  233009 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0110 02:50:53.945137  233009 kubeadm.go:319] 
	I0110 02:50:53.945190  233009 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0110 02:50:53.945195  233009 kubeadm.go:319] 
	I0110 02:50:53.945246  233009 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0110 02:50:53.945321  233009 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0110 02:50:53.945389  233009 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0110 02:50:53.945394  233009 kubeadm.go:319] 
	I0110 02:50:53.945477  233009 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0110 02:50:53.945557  233009 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0110 02:50:53.945563  233009 kubeadm.go:319] 
	I0110 02:50:53.945648  233009 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token qnk1al.g7fjbz0nbykrrgx8 \
	I0110 02:50:53.945751  233009 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e531c0ad449fbe8263f118458e677222d5af5abf33ad8c77cbc1152855f23f5 \
	I0110 02:50:53.945771  233009 kubeadm.go:319] 	--control-plane 
	I0110 02:50:53.945787  233009 kubeadm.go:319] 
	I0110 02:50:53.945873  233009 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0110 02:50:53.945877  233009 kubeadm.go:319] 
	I0110 02:50:53.945959  233009 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qnk1al.g7fjbz0nbykrrgx8 \
	I0110 02:50:53.946061  233009 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2e531c0ad449fbe8263f118458e677222d5af5abf33ad8c77cbc1152855f23f5 
	I0110 02:50:53.949038  233009 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0110 02:50:53.949488  233009 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I0110 02:50:53.949621  233009 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 02:50:53.949666  233009 cni.go:84] Creating CNI manager for ""
	I0110 02:50:53.949679  233009 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 02:50:53.954876  233009 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W0110 02:50:54.241007  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	W0110 02:50:56.740300  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	I0110 02:50:53.957892  233009 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0110 02:50:53.963976  233009 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0110 02:50:53.963999  233009 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0110 02:50:53.985973  233009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0110 02:50:54.695080  233009 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0110 02:50:54.695211  233009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:50:54.695293  233009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-989144 minikube.k8s.io/updated_at=2026_01_10T02_50_54_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510 minikube.k8s.io/name=auto-989144 minikube.k8s.io/primary=true
	I0110 02:50:54.877104  233009 ops.go:34] apiserver oom_adj: -16
	I0110 02:50:54.877230  233009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:50:55.378240  233009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:50:55.878023  233009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:50:56.378336  233009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:50:56.878301  233009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:50:57.378140  233009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:50:57.877894  233009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:50:58.377461  233009 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 02:50:58.527279  233009 kubeadm.go:1114] duration metric: took 3.832106628s to wait for elevateKubeSystemPrivileges
	I0110 02:50:58.527326  233009 kubeadm.go:403] duration metric: took 19.794495871s to StartCluster
	I0110 02:50:58.527345  233009 settings.go:142] acquiring lock: {Name:mkf19ab3097ad600bdb33c5b47b20062ddaabac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:58.527438  233009 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:50:58.528547  233009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/kubeconfig: {Name:mk42056a1d6ed22c80c2fafbcccda1502d4b18e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 02:50:58.530478  233009 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0110 02:50:58.530807  233009 config.go:182] Loaded profile config "auto-989144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:50:58.531040  233009 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 02:50:58.531105  233009 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 02:50:58.531311  233009 addons.go:70] Setting storage-provisioner=true in profile "auto-989144"
	I0110 02:50:58.531344  233009 addons.go:239] Setting addon storage-provisioner=true in "auto-989144"
	I0110 02:50:58.531370  233009 host.go:66] Checking if "auto-989144" exists ...
	I0110 02:50:58.531893  233009 cli_runner.go:164] Run: docker container inspect auto-989144 --format={{.State.Status}}
	I0110 02:50:58.532249  233009 addons.go:70] Setting default-storageclass=true in profile "auto-989144"
	I0110 02:50:58.532274  233009 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-989144"
	I0110 02:50:58.532608  233009 cli_runner.go:164] Run: docker container inspect auto-989144 --format={{.State.Status}}
	I0110 02:50:58.535197  233009 out.go:179] * Verifying Kubernetes components...
	I0110 02:50:58.538946  233009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 02:50:58.572034  233009 addons.go:239] Setting addon default-storageclass=true in "auto-989144"
	I0110 02:50:58.572077  233009 host.go:66] Checking if "auto-989144" exists ...
	I0110 02:50:58.572596  233009 cli_runner.go:164] Run: docker container inspect auto-989144 --format={{.State.Status}}
	I0110 02:50:58.584970  233009 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 02:50:58.587910  233009 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:50:58.587933  233009 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 02:50:58.587996  233009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989144
	I0110 02:50:58.610751  233009 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 02:50:58.610772  233009 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 02:50:58.610849  233009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-989144
	I0110 02:50:58.641590  233009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/auto-989144/id_rsa Username:docker}
	I0110 02:50:58.664031  233009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/auto-989144/id_rsa Username:docker}
	I0110 02:50:58.896648  233009 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 02:50:58.956224  233009 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0110 02:50:58.956399  233009 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 02:50:58.985883  233009 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 02:50:59.733063  233009 start.go:987] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0110 02:50:59.734162  233009 node_ready.go:35] waiting up to 15m0s for node "auto-989144" to be "Ready" ...
	I0110 02:50:59.802412  233009 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W0110 02:50:58.743154  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	W0110 02:51:01.244052  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	I0110 02:50:59.805204  233009 addons.go:530] duration metric: took 1.274091736s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0110 02:51:00.247559  233009 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-989144" context rescaled to 1 replicas
	W0110 02:51:01.741664  233009 node_ready.go:57] node "auto-989144" has "Ready":"False" status (will retry)
	W0110 02:51:03.742138  231933 pod_ready.go:104] pod "coredns-7d764666f9-sck2c" is not "Ready", error: <nil>
	I0110 02:51:05.741594  231933 pod_ready.go:94] pod "coredns-7d764666f9-sck2c" is "Ready"
	I0110 02:51:05.741625  231933 pod_ready.go:86] duration metric: took 32.50655955s for pod "coredns-7d764666f9-sck2c" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:05.744399  231933 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:05.750171  231933 pod_ready.go:94] pod "etcd-default-k8s-diff-port-403885" is "Ready"
	I0110 02:51:05.750195  231933 pod_ready.go:86] duration metric: took 5.768233ms for pod "etcd-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:05.752470  231933 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:05.757238  231933 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-403885" is "Ready"
	I0110 02:51:05.757261  231933 pod_ready.go:86] duration metric: took 4.76712ms for pod "kube-apiserver-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:05.759316  231933 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:05.938558  231933 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-403885" is "Ready"
	I0110 02:51:05.938586  231933 pod_ready.go:86] duration metric: took 179.251344ms for pod "kube-controller-manager-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:06.138739  231933 pod_ready.go:83] waiting for pod "kube-proxy-ss9fs" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:06.538736  231933 pod_ready.go:94] pod "kube-proxy-ss9fs" is "Ready"
	I0110 02:51:06.538760  231933 pod_ready.go:86] duration metric: took 399.996046ms for pod "kube-proxy-ss9fs" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:06.739273  231933 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:07.138506  231933 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-403885" is "Ready"
	I0110 02:51:07.138532  231933 pod_ready.go:86] duration metric: took 399.230955ms for pod "kube-scheduler-default-k8s-diff-port-403885" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:07.138545  231933 pod_ready.go:40] duration metric: took 33.908287639s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:51:07.198377  231933 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 02:51:07.201478  231933 out.go:203] 
	W0110 02:51:07.204318  231933 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 02:51:07.207229  231933 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:51:07.210084  231933 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-403885" cluster and "default" namespace by default
	W0110 02:51:04.237909  233009 node_ready.go:57] node "auto-989144" has "Ready":"False" status (will retry)
	W0110 02:51:06.739702  233009 node_ready.go:57] node "auto-989144" has "Ready":"False" status (will retry)
	W0110 02:51:09.237549  233009 node_ready.go:57] node "auto-989144" has "Ready":"False" status (will retry)
	W0110 02:51:11.737555  233009 node_ready.go:57] node "auto-989144" has "Ready":"False" status (will retry)
	I0110 02:51:12.742381  233009 node_ready.go:49] node "auto-989144" is "Ready"
	I0110 02:51:12.742409  233009 node_ready.go:38] duration metric: took 13.00818533s for node "auto-989144" to be "Ready" ...
	I0110 02:51:12.742428  233009 api_server.go:52] waiting for apiserver process to appear ...
	I0110 02:51:12.742480  233009 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:51:12.755022  233009 api_server.go:72] duration metric: took 14.22381075s to wait for apiserver process to appear ...
	I0110 02:51:12.755046  233009 api_server.go:88] waiting for apiserver healthz status ...
	I0110 02:51:12.755064  233009 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 02:51:12.764677  233009 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 02:51:12.766870  233009 api_server.go:141] control plane version: v1.35.0
	I0110 02:51:12.766941  233009 api_server.go:131] duration metric: took 11.888376ms to wait for apiserver health ...
	I0110 02:51:12.766965  233009 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 02:51:12.771220  233009 system_pods.go:59] 8 kube-system pods found
	I0110 02:51:12.771300  233009 system_pods.go:61] "coredns-7d764666f9-n982k" [877f2d3b-23c5-49a4-9a27-adab42fe2451] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:51:12.771322  233009 system_pods.go:61] "etcd-auto-989144" [e2f0ccc2-a620-4ebd-8852-0e2022be1ce6] Running
	I0110 02:51:12.771362  233009 system_pods.go:61] "kindnet-ktmk2" [962be512-25e1-4c67-a7b5-f21dac1ac303] Running
	I0110 02:51:12.771383  233009 system_pods.go:61] "kube-apiserver-auto-989144" [9dce557d-11a9-406e-a89f-45347e73cfa8] Running
	I0110 02:51:12.771404  233009 system_pods.go:61] "kube-controller-manager-auto-989144" [2744eb18-2b2a-4bbb-b271-07eaccb0f0fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:51:12.771426  233009 system_pods.go:61] "kube-proxy-l9j6v" [3905bb5b-03f0-4eac-8478-4a972acad6cb] Running
	I0110 02:51:12.771463  233009 system_pods.go:61] "kube-scheduler-auto-989144" [0a34810d-f7ec-4d9c-8a8c-dcb5ae67683c] Running
	I0110 02:51:12.771492  233009 system_pods.go:61] "storage-provisioner" [c0929b7c-b8da-4b7d-b8e9-ccfe9e1bc5a1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:51:12.771524  233009 system_pods.go:74] duration metric: took 4.540394ms to wait for pod list to return data ...
	I0110 02:51:12.771552  233009 default_sa.go:34] waiting for default service account to be created ...
	I0110 02:51:12.778277  233009 default_sa.go:45] found service account: "default"
	I0110 02:51:12.778359  233009 default_sa.go:55] duration metric: took 6.787472ms for default service account to be created ...
	I0110 02:51:12.778384  233009 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 02:51:12.784214  233009 system_pods.go:86] 8 kube-system pods found
	I0110 02:51:12.784294  233009 system_pods.go:89] "coredns-7d764666f9-n982k" [877f2d3b-23c5-49a4-9a27-adab42fe2451] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:51:12.784320  233009 system_pods.go:89] "etcd-auto-989144" [e2f0ccc2-a620-4ebd-8852-0e2022be1ce6] Running
	I0110 02:51:12.784356  233009 system_pods.go:89] "kindnet-ktmk2" [962be512-25e1-4c67-a7b5-f21dac1ac303] Running
	I0110 02:51:12.784379  233009 system_pods.go:89] "kube-apiserver-auto-989144" [9dce557d-11a9-406e-a89f-45347e73cfa8] Running
	I0110 02:51:12.784405  233009 system_pods.go:89] "kube-controller-manager-auto-989144" [2744eb18-2b2a-4bbb-b271-07eaccb0f0fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:51:12.784441  233009 system_pods.go:89] "kube-proxy-l9j6v" [3905bb5b-03f0-4eac-8478-4a972acad6cb] Running
	I0110 02:51:12.784466  233009 system_pods.go:89] "kube-scheduler-auto-989144" [0a34810d-f7ec-4d9c-8a8c-dcb5ae67683c] Running
	I0110 02:51:12.784500  233009 system_pods.go:89] "storage-provisioner" [c0929b7c-b8da-4b7d-b8e9-ccfe9e1bc5a1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:51:12.784563  233009 retry.go:84] will retry after 300ms: missing components: kube-dns
	I0110 02:51:13.085974  233009 system_pods.go:86] 8 kube-system pods found
	I0110 02:51:13.086010  233009 system_pods.go:89] "coredns-7d764666f9-n982k" [877f2d3b-23c5-49a4-9a27-adab42fe2451] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:51:13.086017  233009 system_pods.go:89] "etcd-auto-989144" [e2f0ccc2-a620-4ebd-8852-0e2022be1ce6] Running
	I0110 02:51:13.086024  233009 system_pods.go:89] "kindnet-ktmk2" [962be512-25e1-4c67-a7b5-f21dac1ac303] Running
	I0110 02:51:13.086071  233009 system_pods.go:89] "kube-apiserver-auto-989144" [9dce557d-11a9-406e-a89f-45347e73cfa8] Running
	I0110 02:51:13.086079  233009 system_pods.go:89] "kube-controller-manager-auto-989144" [2744eb18-2b2a-4bbb-b271-07eaccb0f0fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:51:13.086084  233009 system_pods.go:89] "kube-proxy-l9j6v" [3905bb5b-03f0-4eac-8478-4a972acad6cb] Running
	I0110 02:51:13.086093  233009 system_pods.go:89] "kube-scheduler-auto-989144" [0a34810d-f7ec-4d9c-8a8c-dcb5ae67683c] Running
	I0110 02:51:13.086100  233009 system_pods.go:89] "storage-provisioner" [c0929b7c-b8da-4b7d-b8e9-ccfe9e1bc5a1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:51:13.341772  233009 system_pods.go:86] 8 kube-system pods found
	I0110 02:51:13.341804  233009 system_pods.go:89] "coredns-7d764666f9-n982k" [877f2d3b-23c5-49a4-9a27-adab42fe2451] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 02:51:13.341833  233009 system_pods.go:89] "etcd-auto-989144" [e2f0ccc2-a620-4ebd-8852-0e2022be1ce6] Running
	I0110 02:51:13.341853  233009 system_pods.go:89] "kindnet-ktmk2" [962be512-25e1-4c67-a7b5-f21dac1ac303] Running
	I0110 02:51:13.341859  233009 system_pods.go:89] "kube-apiserver-auto-989144" [9dce557d-11a9-406e-a89f-45347e73cfa8] Running
	I0110 02:51:13.341865  233009 system_pods.go:89] "kube-controller-manager-auto-989144" [2744eb18-2b2a-4bbb-b271-07eaccb0f0fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 02:51:13.341870  233009 system_pods.go:89] "kube-proxy-l9j6v" [3905bb5b-03f0-4eac-8478-4a972acad6cb] Running
	I0110 02:51:13.341876  233009 system_pods.go:89] "kube-scheduler-auto-989144" [0a34810d-f7ec-4d9c-8a8c-dcb5ae67683c] Running
	I0110 02:51:13.341886  233009 system_pods.go:89] "storage-provisioner" [c0929b7c-b8da-4b7d-b8e9-ccfe9e1bc5a1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 02:51:13.824209  233009 system_pods.go:86] 8 kube-system pods found
	I0110 02:51:13.824236  233009 system_pods.go:89] "coredns-7d764666f9-n982k" [877f2d3b-23c5-49a4-9a27-adab42fe2451] Running
	I0110 02:51:13.824249  233009 system_pods.go:89] "etcd-auto-989144" [e2f0ccc2-a620-4ebd-8852-0e2022be1ce6] Running
	I0110 02:51:13.824255  233009 system_pods.go:89] "kindnet-ktmk2" [962be512-25e1-4c67-a7b5-f21dac1ac303] Running
	I0110 02:51:13.824259  233009 system_pods.go:89] "kube-apiserver-auto-989144" [9dce557d-11a9-406e-a89f-45347e73cfa8] Running
	I0110 02:51:13.824266  233009 system_pods.go:89] "kube-controller-manager-auto-989144" [2744eb18-2b2a-4bbb-b271-07eaccb0f0fc] Running
	I0110 02:51:13.824271  233009 system_pods.go:89] "kube-proxy-l9j6v" [3905bb5b-03f0-4eac-8478-4a972acad6cb] Running
	I0110 02:51:13.824276  233009 system_pods.go:89] "kube-scheduler-auto-989144" [0a34810d-f7ec-4d9c-8a8c-dcb5ae67683c] Running
	I0110 02:51:13.824280  233009 system_pods.go:89] "storage-provisioner" [c0929b7c-b8da-4b7d-b8e9-ccfe9e1bc5a1] Running
	I0110 02:51:13.824287  233009 system_pods.go:126] duration metric: took 1.045884994s to wait for k8s-apps to be running ...
	I0110 02:51:13.824295  233009 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 02:51:13.824349  233009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:51:13.838729  233009 system_svc.go:56] duration metric: took 14.423825ms WaitForService to wait for kubelet
	I0110 02:51:13.838759  233009 kubeadm.go:587] duration metric: took 15.307551856s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 02:51:13.838779  233009 node_conditions.go:102] verifying NodePressure condition ...
	I0110 02:51:13.841957  233009 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0110 02:51:13.842026  233009 node_conditions.go:123] node cpu capacity is 2
	I0110 02:51:13.842046  233009 node_conditions.go:105] duration metric: took 3.261035ms to run NodePressure ...
	I0110 02:51:13.842060  233009 start.go:242] waiting for startup goroutines ...
	I0110 02:51:13.842068  233009 start.go:247] waiting for cluster config update ...
	I0110 02:51:13.842079  233009 start.go:256] writing updated cluster config ...
	I0110 02:51:13.842382  233009 ssh_runner.go:195] Run: rm -f paused
	I0110 02:51:13.846419  233009 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:51:13.849777  233009 pod_ready.go:83] waiting for pod "coredns-7d764666f9-n982k" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:13.853863  233009 pod_ready.go:94] pod "coredns-7d764666f9-n982k" is "Ready"
	I0110 02:51:13.853926  233009 pod_ready.go:86] duration metric: took 4.122509ms for pod "coredns-7d764666f9-n982k" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:13.856207  233009 pod_ready.go:83] waiting for pod "etcd-auto-989144" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:13.859978  233009 pod_ready.go:94] pod "etcd-auto-989144" is "Ready"
	I0110 02:51:13.860000  233009 pod_ready.go:86] duration metric: took 3.772537ms for pod "etcd-auto-989144" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:13.862859  233009 pod_ready.go:83] waiting for pod "kube-apiserver-auto-989144" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:13.866799  233009 pod_ready.go:94] pod "kube-apiserver-auto-989144" is "Ready"
	I0110 02:51:13.866825  233009 pod_ready.go:86] duration metric: took 3.945496ms for pod "kube-apiserver-auto-989144" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:13.868987  233009 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-989144" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:14.250056  233009 pod_ready.go:94] pod "kube-controller-manager-auto-989144" is "Ready"
	I0110 02:51:14.250089  233009 pod_ready.go:86] duration metric: took 381.079052ms for pod "kube-controller-manager-auto-989144" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:14.450578  233009 pod_ready.go:83] waiting for pod "kube-proxy-l9j6v" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:14.850707  233009 pod_ready.go:94] pod "kube-proxy-l9j6v" is "Ready"
	I0110 02:51:14.850783  233009 pod_ready.go:86] duration metric: took 400.175199ms for pod "kube-proxy-l9j6v" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:15.051054  233009 pod_ready.go:83] waiting for pod "kube-scheduler-auto-989144" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:15.450683  233009 pod_ready.go:94] pod "kube-scheduler-auto-989144" is "Ready"
	I0110 02:51:15.450719  233009 pod_ready.go:86] duration metric: took 399.636103ms for pod "kube-scheduler-auto-989144" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 02:51:15.450733  233009 pod_ready.go:40] duration metric: took 1.60428708s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 02:51:15.510042  233009 start.go:625] kubectl: 1.33.2, cluster: 1.35.0 (minor skew: 2)
	I0110 02:51:15.513130  233009 out.go:203] 
	W0110 02:51:15.516640  233009 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0.
	I0110 02:51:15.519559  233009 out.go:179]   - Want kubectl v1.35.0? Try 'minikube kubectl -- get pods -A'
	I0110 02:51:15.522469  233009 out.go:179] * Done! kubectl is now configured to use "auto-989144" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 10 02:51:03 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:03.607215983Z" level=info msg="Started container" PID=1694 containerID=910d0dab6a77a8bbc264e0daefb03ba43c5628340ff9b5c13ebb7ce7186fb91b description=kube-system/storage-provisioner/storage-provisioner id=91f2cf90-273e-4a96-ada8-e32de6c6faf8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2c35a4d3fd7dd89cc874840b3e55ea477b9a709b141d944875371a2b12088994
	Jan 10 02:51:13 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:13.049785085Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:51:13 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:13.049821309Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:51:13 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:13.054783378Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:51:13 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:13.054815516Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:51:13 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:13.05880632Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:51:13 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:13.058962189Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:51:13 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:13.063360243Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:51:13 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:13.063393465Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:51:13 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:13.063452426Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Jan 10 02:51:13 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:13.067269835Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 02:51:13 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:13.067307142Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 02:51:16 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:16.361139152Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7d2f8673-9fa6-4afb-8ad6-cbbe4ae607d6 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:51:16 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:16.363272905Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=24b52c6d-ef16-4d27-9387-bd29605e0774 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 02:51:16 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:16.364601625Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ngnzh/dashboard-metrics-scraper" id=5c085bbb-3b80-4a8f-915b-ed36916f86c2 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:51:16 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:16.364704727Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:51:16 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:16.377031312Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:51:16 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:16.37765765Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 02:51:16 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:16.402445675Z" level=info msg="Created container cea7cb71f60a3bf1cc0f92af289dd1a12432224c4eec12b146d182a413e0d79a: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ngnzh/dashboard-metrics-scraper" id=5c085bbb-3b80-4a8f-915b-ed36916f86c2 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 02:51:16 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:16.40350483Z" level=info msg="Starting container: cea7cb71f60a3bf1cc0f92af289dd1a12432224c4eec12b146d182a413e0d79a" id=0d2c95ea-1f97-47d1-bc92-7477266ba35a name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 02:51:16 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:16.410560284Z" level=info msg="Started container" PID=1773 containerID=cea7cb71f60a3bf1cc0f92af289dd1a12432224c4eec12b146d182a413e0d79a description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ngnzh/dashboard-metrics-scraper id=0d2c95ea-1f97-47d1-bc92-7477266ba35a name=/runtime.v1.RuntimeService/StartContainer sandboxID=98421345179a1438c45812935ad55ee036c9c84f17a862507655df21397da3b4
	Jan 10 02:51:16 default-k8s-diff-port-403885 conmon[1771]: conmon cea7cb71f60a3bf1cc0f <ninfo>: container 1773 exited with status 1
	Jan 10 02:51:16 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:16.618767145Z" level=info msg="Removing container: a8bfb9653382e887ee3cf9c6de048fbe30fabcad1d3c5e5fc7a07a62a94a0f5c" id=1bd78e4a-5b3f-479b-8f1c-1c20f29c6f1c name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:51:16 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:16.627393929Z" level=info msg="Error loading conmon cgroup of container a8bfb9653382e887ee3cf9c6de048fbe30fabcad1d3c5e5fc7a07a62a94a0f5c: cgroup deleted" id=1bd78e4a-5b3f-479b-8f1c-1c20f29c6f1c name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 02:51:16 default-k8s-diff-port-403885 crio[663]: time="2026-01-10T02:51:16.632024322Z" level=info msg="Removed container a8bfb9653382e887ee3cf9c6de048fbe30fabcad1d3c5e5fc7a07a62a94a0f5c: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ngnzh/dashboard-metrics-scraper" id=1bd78e4a-5b3f-479b-8f1c-1c20f29c6f1c name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	cea7cb71f60a3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago       Exited              dashboard-metrics-scraper   3                   98421345179a1       dashboard-metrics-scraper-867fb5f87b-ngnzh             kubernetes-dashboard
	910d0dab6a77a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago      Running             storage-provisioner         2                   2c35a4d3fd7dd       storage-provisioner                                    kube-system
	c189fb7bad01f       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   35 seconds ago      Running             kubernetes-dashboard        0                   9c44612b5f45d       kubernetes-dashboard-b84665fb8-l5llr                   kubernetes-dashboard
	fa1837feaa39e       e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf                                           52 seconds ago      Running             coredns                     1                   879d5ffee7c26       coredns-7d764666f9-sck2c                               kube-system
	b6acf28c89529       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   08dcbe9c1056c       busybox                                                default
	bc847f22b7096       c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13                                           52 seconds ago      Running             kindnet-cni                 1                   817278ab3546b       kindnet-4h8vm                                          kube-system
	b162bf1f9ee4d       de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5                                           52 seconds ago      Running             kube-proxy                  1                   606888b668327       kube-proxy-ss9fs                                       kube-system
	f3a4dab3b3499       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago      Exited              storage-provisioner         1                   2c35a4d3fd7dd       storage-provisioner                                    kube-system
	ccaef7514d5ac       c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856                                           59 seconds ago      Running             kube-apiserver              1                   089af88f8330e       kube-apiserver-default-k8s-diff-port-403885            kube-system
	0a4524e2475eb       271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57                                           59 seconds ago      Running             etcd                        1                   ec3eacb0b9c47       etcd-default-k8s-diff-port-403885                      kube-system
	73f1ff6161183       ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f                                           59 seconds ago      Running             kube-scheduler              1                   c2f50ce98fc5a       kube-scheduler-default-k8s-diff-port-403885            kube-system
	3eef2c483d9e9       88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0                                           59 seconds ago      Running             kube-controller-manager     1                   024569b12df18       kube-controller-manager-default-k8s-diff-port-403885   kube-system
	
	
	==> coredns [fa1837feaa39e020154dcf1fa0e3cfdcc389657a13b2a7911522e055b8d5c205] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/arm64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:53725 - 54010 "HINFO IN 2023192538793103702.2930508901437709455. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.065750182s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-403885
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-403885
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=988620e39e58c582faf79ad14beb1651931f6510
	                    minikube.k8s.io/name=default-k8s-diff-port-403885
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T02_49_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 02:49:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-403885
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 02:51:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 02:51:02 +0000   Sat, 10 Jan 2026 02:49:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 02:51:02 +0000   Sat, 10 Jan 2026 02:49:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 02:51:02 +0000   Sat, 10 Jan 2026 02:49:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 02:51:02 +0000   Sat, 10 Jan 2026 02:49:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-403885
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c099f7e34d7e60970eda9f1e6960b094
	  System UUID:                c5be33d9-0382-423b-9b90-3c979c14f2d9
	  Boot ID:                    41f52236-76b9-4dc8-bfe3-121fbdeb9659
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-7d764666f9-sck2c                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     112s
	  kube-system                 etcd-default-k8s-diff-port-403885                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         117s
	  kube-system                 kindnet-4h8vm                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-default-k8s-diff-port-403885             250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-403885    200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-ss9fs                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-default-k8s-diff-port-403885             100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-ngnzh              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-l5llr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  113s  node-controller  Node default-k8s-diff-port-403885 event: Registered Node default-k8s-diff-port-403885 in Controller
	  Normal  RegisteredNode  51s   node-controller  Node default-k8s-diff-port-403885 event: Registered Node default-k8s-diff-port-403885 in Controller
	
	
	==> dmesg <==
	[ +23.212409] overlayfs: idmapped layers are currently not supported
	[  +9.866169] overlayfs: idmapped layers are currently not supported
	[Jan10 02:19] overlayfs: idmapped layers are currently not supported
	[Jan10 02:20] overlayfs: idmapped layers are currently not supported
	[ +35.960240] overlayfs: idmapped layers are currently not supported
	[Jan10 02:21] overlayfs: idmapped layers are currently not supported
	[Jan10 02:23] overlayfs: idmapped layers are currently not supported
	[Jan10 02:24] overlayfs: idmapped layers are currently not supported
	[Jan10 02:25] overlayfs: idmapped layers are currently not supported
	[Jan10 02:30] overlayfs: idmapped layers are currently not supported
	[Jan10 02:31] overlayfs: idmapped layers are currently not supported
	[Jan10 02:35] overlayfs: idmapped layers are currently not supported
	[Jan10 02:37] overlayfs: idmapped layers are currently not supported
	[Jan10 02:41] overlayfs: idmapped layers are currently not supported
	[ +37.534021] overlayfs: idmapped layers are currently not supported
	[Jan10 02:43] overlayfs: idmapped layers are currently not supported
	[Jan10 02:44] overlayfs: idmapped layers are currently not supported
	[Jan10 02:45] overlayfs: idmapped layers are currently not supported
	[Jan10 02:46] overlayfs: idmapped layers are currently not supported
	[Jan10 02:48] overlayfs: idmapped layers are currently not supported
	[Jan10 02:49] overlayfs: idmapped layers are currently not supported
	[  +4.690964] overlayfs: idmapped layers are currently not supported
	[ +26.361261] overlayfs: idmapped layers are currently not supported
	[Jan10 02:50] overlayfs: idmapped layers are currently not supported
	[ +20.145083] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0a4524e2475eb21f6dbb36bb2158a508c31075979547ca9c30367ced5eab40f6] <==
	{"level":"info","ts":"2026-01-10T02:50:27.233962Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T02:50:27.234041Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T02:50:27.263214Z","caller":"embed/etcd.go:766","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2026-01-10T02:50:27.263893Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-10T02:50:27.263907Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2026-01-10T02:50:27.264111Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T02:50:27.264135Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T02:50:27.658230Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T02:50:27.658300Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T02:50:27.658340Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-10T02:50:27.658351Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:50:27.658365Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T02:50:27.662390Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-10T02:50:27.662432Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T02:50:27.662450Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2026-01-10T02:50:27.662466Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-10T02:50:27.664051Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-diff-port-403885 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T02:50:27.664089Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:50:27.664124Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T02:50:27.665108Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:50:27.666981Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T02:50:27.667619Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T02:50:27.705515Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2026-01-10T02:50:27.723901Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T02:50:27.723954Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 02:51:25 up  1:33,  0 user,  load average: 2.80, 2.67, 2.18
	Linux default-k8s-diff-port-403885 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bc847f22b7096dcdd6f43e2f7fd0bb0bec20221d544edea551adf64088c8d1f9] <==
	I0110 02:50:32.853398       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 02:50:32.924095       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0110 02:50:32.924300       1 main.go:148] setting mtu 1500 for CNI 
	I0110 02:50:32.924341       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 02:50:32.924384       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T02:50:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 02:50:33.037827       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 02:50:33.123844       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 02:50:33.123956       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 02:50:33.125579       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0110 02:51:03.041493       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0110 02:51:03.125140       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0110 02:51:03.125140       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0110 02:51:03.125234       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I0110 02:51:04.424453       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 02:51:04.424485       1 metrics.go:72] Registering metrics
	I0110 02:51:04.424536       1 controller.go:711] "Syncing nftables rules"
	I0110 02:51:13.043966       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 02:51:13.044622       1 main.go:301] handling current node
	I0110 02:51:23.039881       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 02:51:23.039919       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ccaef7514d5ac77efad7a1f91ebd93145b03e3462b1154eb3f723d4a112acdf6] <==
	I0110 02:50:31.370526       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0110 02:50:31.370601       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 02:50:31.370670       1 aggregator.go:187] initial CRD sync complete...
	I0110 02:50:31.370678       1 autoregister_controller.go:144] Starting autoregister controller
	I0110 02:50:31.370684       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 02:50:31.370689       1 cache.go:39] Caches are synced for autoregister controller
	I0110 02:50:31.379256       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 02:50:31.379361       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:31.379378       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:31.382379       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0110 02:50:31.382399       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0110 02:50:31.386037       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 02:50:31.399310       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E0110 02:50:31.431496       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 02:50:31.912001       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 02:50:31.957172       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 02:50:32.048049       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 02:50:32.135645       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 02:50:32.166683       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 02:50:32.196204       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 02:50:32.413516       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.64.212"}
	I0110 02:50:32.477425       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.226.81"}
	I0110 02:50:34.934657       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 02:50:34.978953       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 02:50:35.032773       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [3eef2c483d9e9a9a8a3ab6bf3e8cf944fedbac4b9d208f8d1d0c7a3086f100ac] <==
	I0110 02:50:34.421475       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.421554       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.421601       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.421656       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.421759       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 02:50:34.421869       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="default-k8s-diff-port-403885"
	I0110 02:50:34.421962       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0110 02:50:34.422025       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.428128       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.432679       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.432774       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.433425       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.433505       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.433581       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.433757       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.434934       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.435044       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.448722       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.452266       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.458815       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.513245       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.513364       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 02:50:34.513411       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 02:50:34.521275       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:34.986333       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [b162bf1f9ee4d4f795e11b6f38b571c7fe566f4233b8a46e579adb8bdc2bdc39] <==
	I0110 02:50:32.949790       1 server_linux.go:53] "Using iptables proxy"
	I0110 02:50:33.025628       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:50:33.126837       1 shared_informer.go:377] "Caches are synced"
	I0110 02:50:33.126927       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0110 02:50:33.127100       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 02:50:33.147089       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 02:50:33.147151       1 server_linux.go:136] "Using iptables Proxier"
	I0110 02:50:33.151096       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 02:50:33.151450       1 server.go:529] "Version info" version="v1.35.0"
	I0110 02:50:33.151515       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:50:33.154566       1 config.go:106] "Starting endpoint slice config controller"
	I0110 02:50:33.154587       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 02:50:33.154848       1 config.go:200] "Starting service config controller"
	I0110 02:50:33.154864       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 02:50:33.155169       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 02:50:33.155226       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 02:50:33.168380       1 config.go:309] "Starting node config controller"
	I0110 02:50:33.168404       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 02:50:33.168412       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 02:50:33.255874       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 02:50:33.255887       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 02:50:33.255902       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [73f1ff616118310e5ea35e9ab5496ff345a48d4dbb2a0c33eba742bc17a20097] <==
	I0110 02:50:29.333855       1 serving.go:386] Generated self-signed cert in-memory
	W0110 02:50:31.096255       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 02:50:31.096390       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 02:50:31.096425       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 02:50:31.096479       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 02:50:31.289002       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 02:50:31.291863       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 02:50:31.294050       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 02:50:31.294067       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 02:50:31.294670       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 02:50:31.294952       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 02:50:31.394202       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 02:50:45 default-k8s-diff-port-403885 kubelet[797]: E0110 02:50:45.937099     797 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-403885" containerName="etcd"
	Jan 10 02:50:46 default-k8s-diff-port-403885 kubelet[797]: E0110 02:50:46.530086     797 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-403885" containerName="etcd"
	Jan 10 02:50:49 default-k8s-diff-port-403885 kubelet[797]: E0110 02:50:49.540105     797 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-l5llr" containerName="kubernetes-dashboard"
	Jan 10 02:50:50 default-k8s-diff-port-403885 kubelet[797]: E0110 02:50:50.542509     797 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-l5llr" containerName="kubernetes-dashboard"
	Jan 10 02:50:53 default-k8s-diff-port-403885 kubelet[797]: E0110 02:50:53.359676     797 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ngnzh" containerName="dashboard-metrics-scraper"
	Jan 10 02:50:53 default-k8s-diff-port-403885 kubelet[797]: I0110 02:50:53.360213     797 scope.go:122] "RemoveContainer" containerID="428237fb0141d71c924e94787d4ef3230f609869b503542b047b35082b621ef2"
	Jan 10 02:50:53 default-k8s-diff-port-403885 kubelet[797]: I0110 02:50:53.551766     797 scope.go:122] "RemoveContainer" containerID="428237fb0141d71c924e94787d4ef3230f609869b503542b047b35082b621ef2"
	Jan 10 02:50:53 default-k8s-diff-port-403885 kubelet[797]: E0110 02:50:53.552510     797 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ngnzh" containerName="dashboard-metrics-scraper"
	Jan 10 02:50:53 default-k8s-diff-port-403885 kubelet[797]: I0110 02:50:53.552537     797 scope.go:122] "RemoveContainer" containerID="a8bfb9653382e887ee3cf9c6de048fbe30fabcad1d3c5e5fc7a07a62a94a0f5c"
	Jan 10 02:50:53 default-k8s-diff-port-403885 kubelet[797]: E0110 02:50:53.552784     797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ngnzh_kubernetes-dashboard(ba3001f4-1a32-4808-bd59-6d66fe5a0867)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ngnzh" podUID="ba3001f4-1a32-4808-bd59-6d66fe5a0867"
	Jan 10 02:50:53 default-k8s-diff-port-403885 kubelet[797]: I0110 02:50:53.569244     797 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-l5llr" podStartSLOduration=4.94752396 podStartE2EDuration="18.568455136s" podCreationTimestamp="2026-01-10 02:50:35 +0000 UTC" firstStartedPulling="2026-01-10 02:50:35.841719964 +0000 UTC m=+10.794840923" lastFinishedPulling="2026-01-10 02:50:49.462651139 +0000 UTC m=+24.415772099" observedRunningTime="2026-01-10 02:50:49.558215848 +0000 UTC m=+24.511336816" watchObservedRunningTime="2026-01-10 02:50:53.568455136 +0000 UTC m=+28.521576096"
	Jan 10 02:50:55 default-k8s-diff-port-403885 kubelet[797]: E0110 02:50:55.769826     797 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ngnzh" containerName="dashboard-metrics-scraper"
	Jan 10 02:50:55 default-k8s-diff-port-403885 kubelet[797]: I0110 02:50:55.769875     797 scope.go:122] "RemoveContainer" containerID="a8bfb9653382e887ee3cf9c6de048fbe30fabcad1d3c5e5fc7a07a62a94a0f5c"
	Jan 10 02:50:55 default-k8s-diff-port-403885 kubelet[797]: E0110 02:50:55.770033     797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ngnzh_kubernetes-dashboard(ba3001f4-1a32-4808-bd59-6d66fe5a0867)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ngnzh" podUID="ba3001f4-1a32-4808-bd59-6d66fe5a0867"
	Jan 10 02:51:03 default-k8s-diff-port-403885 kubelet[797]: I0110 02:51:03.577016     797 scope.go:122] "RemoveContainer" containerID="f3a4dab3b349942b6b00dff9e9ac2e0626b5c7d8890cd8aa838981f79a2d1c23"
	Jan 10 02:51:05 default-k8s-diff-port-403885 kubelet[797]: E0110 02:51:05.432211     797 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-sck2c" containerName="coredns"
	Jan 10 02:51:16 default-k8s-diff-port-403885 kubelet[797]: E0110 02:51:16.359963     797 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ngnzh" containerName="dashboard-metrics-scraper"
	Jan 10 02:51:16 default-k8s-diff-port-403885 kubelet[797]: I0110 02:51:16.360506     797 scope.go:122] "RemoveContainer" containerID="a8bfb9653382e887ee3cf9c6de048fbe30fabcad1d3c5e5fc7a07a62a94a0f5c"
	Jan 10 02:51:16 default-k8s-diff-port-403885 kubelet[797]: I0110 02:51:16.616111     797 scope.go:122] "RemoveContainer" containerID="a8bfb9653382e887ee3cf9c6de048fbe30fabcad1d3c5e5fc7a07a62a94a0f5c"
	Jan 10 02:51:16 default-k8s-diff-port-403885 kubelet[797]: E0110 02:51:16.616689     797 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ngnzh" containerName="dashboard-metrics-scraper"
	Jan 10 02:51:16 default-k8s-diff-port-403885 kubelet[797]: I0110 02:51:16.616718     797 scope.go:122] "RemoveContainer" containerID="cea7cb71f60a3bf1cc0f92af289dd1a12432224c4eec12b146d182a413e0d79a"
	Jan 10 02:51:16 default-k8s-diff-port-403885 kubelet[797]: E0110 02:51:16.617212     797 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-ngnzh_kubernetes-dashboard(ba3001f4-1a32-4808-bd59-6d66fe5a0867)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-ngnzh" podUID="ba3001f4-1a32-4808-bd59-6d66fe5a0867"
	Jan 10 02:51:20 default-k8s-diff-port-403885 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 02:51:20 default-k8s-diff-port-403885 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 02:51:20 default-k8s-diff-port-403885 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [c189fb7bad01fb39bc337c1bc6565e786ee9e056431d9067b94b07b97b2427cc] <==
	2026/01/10 02:50:49 Using namespace: kubernetes-dashboard
	2026/01/10 02:50:49 Using in-cluster config to connect to apiserver
	2026/01/10 02:50:49 Using secret token for csrf signing
	2026/01/10 02:50:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 02:50:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 02:50:49 Successful initial request to the apiserver, version: v1.35.0
	2026/01/10 02:50:49 Generating JWE encryption key
	2026/01/10 02:50:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 02:50:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 02:50:50 Initializing JWE encryption key from synchronized object
	2026/01/10 02:50:50 Creating in-cluster Sidecar client
	2026/01/10 02:50:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:50:50 Serving insecurely on HTTP port: 9090
	2026/01/10 02:51:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 02:50:49 Starting overwatch
	
	
	==> storage-provisioner [910d0dab6a77a8bbc264e0daefb03ba43c5628340ff9b5c13ebb7ce7186fb91b] <==
	I0110 02:51:03.632713       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 02:51:03.647339       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 02:51:03.647458       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 02:51:03.651108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:51:07.106527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:51:11.366929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:51:14.965034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:51:18.018688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:51:21.040715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:51:21.048408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:51:21.048615       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 02:51:21.048785       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-403885_ed02fb7d-452a-42e6-bd52-572a9babe278!
	I0110 02:51:21.049427       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa1af1bb-334c-4465-9421-7ffe1f5fe2f3", APIVersion:"v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-403885_ed02fb7d-452a-42e6-bd52-572a9babe278 became leader
	W0110 02:51:21.058015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:51:21.061379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 02:51:21.149220       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-403885_ed02fb7d-452a-42e6-bd52-572a9babe278!
	W0110 02:51:23.065042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:51:23.071965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:51:25.076181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 02:51:25.081524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f3a4dab3b349942b6b00dff9e9ac2e0626b5c7d8890cd8aa838981f79a2d1c23] <==
	I0110 02:50:32.924734       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 02:51:02.926688       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-403885 -n default-k8s-diff-port-403885
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-403885 -n default-k8s-diff-port-403885: exit status 2 (365.246452ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-403885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.63s)
E0110 02:56:13.175789    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:56:16.120899    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:56:16.126253    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:56:16.136557    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:56:16.157317    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:56:16.197690    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:56:16.278138    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:56:16.438551    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:56:16.758924    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:56:17.399923    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:56:18.680807    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:56:21.241597    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:56:26.362518    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (274/332)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 7.86
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.35.0/json-events 4.02
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.09
18 TestDownloadOnly/v1.35.0/DeleteAll 0.21
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 129.73
31 TestAddons/serial/GCPAuth/Namespaces 0.23
32 TestAddons/serial/GCPAuth/FakeCredentials 8.92
48 TestAddons/StoppedEnableDisable 12.41
49 TestCertOptions 30.44
50 TestCertExpiration 224.51
58 TestErrorSpam/setup 26.95
59 TestErrorSpam/start 0.79
60 TestErrorSpam/status 1.14
61 TestErrorSpam/pause 6.19
62 TestErrorSpam/unpause 5.38
63 TestErrorSpam/stop 1.49
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 47.7
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 29.11
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.95
75 TestFunctional/serial/CacheCmd/cache/add_local 1.21
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.88
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 30.63
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.43
86 TestFunctional/serial/LogsFileCmd 1.48
87 TestFunctional/serial/InvalidService 3.89
89 TestFunctional/parallel/ConfigCmd 0.44
90 TestFunctional/parallel/DashboardCmd 18.55
91 TestFunctional/parallel/DryRun 0.49
92 TestFunctional/parallel/InternationalLanguage 0.21
93 TestFunctional/parallel/StatusCmd 1.22
97 TestFunctional/parallel/ServiceCmdConnect 7.59
98 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/PersistentVolumeClaim 20.61
101 TestFunctional/parallel/SSHCmd 0.73
102 TestFunctional/parallel/CpCmd 2.02
104 TestFunctional/parallel/FileSync 0.34
105 TestFunctional/parallel/CertSync 1.85
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.72
113 TestFunctional/parallel/License 0.32
114 TestFunctional/parallel/Version/short 0.08
115 TestFunctional/parallel/Version/components 0.59
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.77
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.14
121 TestFunctional/parallel/ImageCommands/Setup 1.98
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.25
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.54
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.19
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.66
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.47
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.56
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.42
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.7
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.42
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
143 TestFunctional/parallel/ServiceCmd/DeployApp 6.26
144 TestFunctional/parallel/ServiceCmd/List 0.57
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
147 TestFunctional/parallel/ServiceCmd/Format 0.37
148 TestFunctional/parallel/ServiceCmd/URL 0.48
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.64
150 TestFunctional/parallel/MountCmd/any-port 7.76
151 TestFunctional/parallel/ProfileCmd/profile_list 0.57
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
153 TestFunctional/parallel/MountCmd/specific-port 1.5
154 TestFunctional/parallel/MountCmd/VerifyCleanup 2.84
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 160.63
163 TestMultiControlPlane/serial/DeployApp 7.15
164 TestMultiControlPlane/serial/PingHostFromPods 1.36
165 TestMultiControlPlane/serial/AddWorkerNode 29.93
166 TestMultiControlPlane/serial/NodeLabels 0.13
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.02
168 TestMultiControlPlane/serial/CopyFile 19.8
169 TestMultiControlPlane/serial/StopSecondaryNode 12.91
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.87
171 TestMultiControlPlane/serial/RestartSecondaryNode 22.39
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.21
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 110.09
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.98
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.79
176 TestMultiControlPlane/serial/StopCluster 36
177 TestMultiControlPlane/serial/RestartCluster 78.55
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.77
179 TestMultiControlPlane/serial/AddSecondaryNode 52.5
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.03
185 TestJSONOutput/start/Command 45.76
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.85
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.22
210 TestKicCustomNetwork/create_custom_network 34.47
211 TestKicCustomNetwork/use_default_bridge_network 30.37
212 TestKicExistingNetwork 29.87
213 TestKicCustomSubnet 30.15
214 TestKicStaticIP 30.19
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 59.98
219 TestMountStart/serial/StartWithMountFirst 9.74
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 8.66
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.7
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.28
226 TestMountStart/serial/RestartStopped 7.98
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 73.38
231 TestMultiNode/serial/DeployApp2Nodes 5.11
232 TestMultiNode/serial/PingHostFrom2Pods 0.86
233 TestMultiNode/serial/AddNode 29.17
234 TestMultiNode/serial/MultiNodeLabels 0.08
235 TestMultiNode/serial/ProfileList 0.69
236 TestMultiNode/serial/CopyFile 10.14
237 TestMultiNode/serial/StopNode 2.38
238 TestMultiNode/serial/StartAfterStop 8.54
239 TestMultiNode/serial/RestartKeepsNodes 81.08
240 TestMultiNode/serial/DeleteNode 5.43
241 TestMultiNode/serial/StopMultiNode 23.99
242 TestMultiNode/serial/RestartMultiNode 49.01
243 TestMultiNode/serial/ValidateNameConflict 31.67
250 TestScheduledStopUnix 104.24
253 TestInsufficientStorage 13.12
254 TestRunningBinaryUpgrade 311.59
256 TestKubernetesUpgrade 349.78
257 TestMissingContainerUpgrade 115.76
259 TestPause/serial/Start 56.94
260 TestPause/serial/SecondStartNoReconfiguration 28.13
262 TestStoppedBinaryUpgrade/Setup 0.83
263 TestStoppedBinaryUpgrade/Upgrade 324.69
264 TestStoppedBinaryUpgrade/MinikubeLogs 2.23
272 TestPreload/Start-NoPreload-PullImage 71.21
273 TestPreload/Restart-With-Preload-Check-User-Image 52.64
276 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
277 TestNoKubernetes/serial/StartWithK8s 28.23
278 TestNoKubernetes/serial/StartWithStopK8s 16.06
279 TestNoKubernetes/serial/Start 8.21
280 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
282 TestNoKubernetes/serial/ProfileList 1.03
283 TestNoKubernetes/serial/Stop 1.31
284 TestNoKubernetes/serial/StartNoArgs 7.06
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
293 TestNetworkPlugins/group/false 3.6
298 TestStartStop/group/old-k8s-version/serial/FirstStart 63.63
299 TestStartStop/group/old-k8s-version/serial/DeployApp 9.4
301 TestStartStop/group/old-k8s-version/serial/Stop 12.01
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
303 TestStartStop/group/old-k8s-version/serial/SecondStart 55.68
304 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
305 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
306 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
309 TestStartStop/group/embed-certs/serial/FirstStart 48.41
310 TestStartStop/group/embed-certs/serial/DeployApp 9.31
312 TestStartStop/group/embed-certs/serial/Stop 11.99
313 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
314 TestStartStop/group/embed-certs/serial/SecondStart 55
315 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
316 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
317 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
320 TestStartStop/group/no-preload/serial/FirstStart 53.55
321 TestStartStop/group/no-preload/serial/DeployApp 10.3
323 TestStartStop/group/no-preload/serial/Stop 12.07
324 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
325 TestStartStop/group/no-preload/serial/SecondStart 47.78
326 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
327 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
328 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
331 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 48.67
333 TestStartStop/group/newest-cni/serial/FirstStart 30.79
334 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/Stop 1.44
337 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
338 TestStartStop/group/newest-cni/serial/SecondStart 13.9
339 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.55
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
345 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.04
346 TestPreload/PreloadSrc/gcs 6.57
347 TestPreload/PreloadSrc/github 8.16
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
349 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.69
350 TestPreload/PreloadSrc/gcs-cached 0.62
351 TestNetworkPlugins/group/auto/Start 52.24
352 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
353 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.1
354 TestNetworkPlugins/group/auto/KubeletFlags 0.3
355 TestNetworkPlugins/group/auto/NetCatPod 9.31
356 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
358 TestNetworkPlugins/group/auto/DNS 0.21
359 TestNetworkPlugins/group/auto/Localhost 0.16
360 TestNetworkPlugins/group/auto/HairPin 0.16
361 TestNetworkPlugins/group/kindnet/Start 53.34
362 TestNetworkPlugins/group/calico/Start 57.36
363 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
364 TestNetworkPlugins/group/kindnet/KubeletFlags 0.42
365 TestNetworkPlugins/group/kindnet/NetCatPod 11.34
366 TestNetworkPlugins/group/kindnet/DNS 0.25
367 TestNetworkPlugins/group/kindnet/Localhost 0.18
368 TestNetworkPlugins/group/kindnet/HairPin 0.16
369 TestNetworkPlugins/group/calico/ControllerPod 6.01
370 TestNetworkPlugins/group/calico/KubeletFlags 0.37
371 TestNetworkPlugins/group/calico/NetCatPod 13.44
372 TestNetworkPlugins/group/custom-flannel/Start 59.13
373 TestNetworkPlugins/group/calico/DNS 0.28
374 TestNetworkPlugins/group/calico/Localhost 0.14
375 TestNetworkPlugins/group/calico/HairPin 0.15
376 TestNetworkPlugins/group/enable-default-cni/Start 69.05
377 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
378 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.41
379 TestNetworkPlugins/group/custom-flannel/DNS 0.16
380 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
381 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
382 TestNetworkPlugins/group/flannel/Start 51.97
383 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.4
384 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.42
385 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
386 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
387 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
388 TestNetworkPlugins/group/bridge/Start 74.26
389 TestNetworkPlugins/group/flannel/ControllerPod 6.01
390 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
391 TestNetworkPlugins/group/flannel/NetCatPod 12.27
392 TestNetworkPlugins/group/flannel/DNS 0.18
393 TestNetworkPlugins/group/flannel/Localhost 0.19
394 TestNetworkPlugins/group/flannel/HairPin 0.4
395 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
396 TestNetworkPlugins/group/bridge/NetCatPod 10.27
397 TestNetworkPlugins/group/bridge/DNS 0.15
398 TestNetworkPlugins/group/bridge/Localhost 0.13
399 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (7.86s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-718348 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-718348 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.861285589s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.86s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0110 01:53:44.399457    4168 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0110 01:53:44.399558    4168 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-718348
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-718348: exit status 85 (84.979437ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-718348 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-718348 │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 01:53:36
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 01:53:36.576550    4174 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:53:36.576747    4174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:53:36.576771    4174 out.go:374] Setting ErrFile to fd 2...
	I0110 01:53:36.576790    4174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:53:36.577054    4174 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	W0110 01:53:36.577207    4174 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22414-2353/.minikube/config/config.json: open /home/jenkins/minikube-integration/22414-2353/.minikube/config/config.json: no such file or directory
	I0110 01:53:36.577665    4174 out.go:368] Setting JSON to true
	I0110 01:53:36.578439    4174 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2166,"bootTime":1768007851,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 01:53:36.578536    4174 start.go:143] virtualization:  
	I0110 01:53:36.584427    4174 out.go:99] [download-only-718348] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W0110 01:53:36.584609    4174 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball: no such file or directory
	I0110 01:53:36.584723    4174 notify.go:221] Checking for updates...
	I0110 01:53:36.588640    4174 out.go:171] MINIKUBE_LOCATION=22414
	I0110 01:53:36.591961    4174 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 01:53:36.595519    4174 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 01:53:36.598648    4174 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 01:53:36.601879    4174 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0110 01:53:36.608153    4174 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0110 01:53:36.608434    4174 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 01:53:36.633335    4174 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 01:53:36.633430    4174 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 01:53:37.049305    4174 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2026-01-10 01:53:37.035929309 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 01:53:37.049412    4174 docker.go:319] overlay module found
	I0110 01:53:37.052589    4174 out.go:99] Using the docker driver based on user configuration
	I0110 01:53:37.052636    4174 start.go:309] selected driver: docker
	I0110 01:53:37.052643    4174 start.go:928] validating driver "docker" against <nil>
	I0110 01:53:37.052757    4174 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 01:53:37.119708    4174 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:61 SystemTime:2026-01-10 01:53:37.110085442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 01:53:37.119913    4174 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 01:53:37.120215    4174 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0110 01:53:37.120384    4174 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 01:53:37.123814    4174 out.go:171] Using Docker driver with root privileges
	I0110 01:53:37.126882    4174 cni.go:84] Creating CNI manager for ""
	I0110 01:53:37.126950    4174 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 01:53:37.126963    4174 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 01:53:37.127051    4174 start.go:353] cluster config:
	{Name:download-only-718348 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-718348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 01:53:37.130083    4174 out.go:99] Starting "download-only-718348" primary control-plane node in "download-only-718348" cluster
	I0110 01:53:37.130115    4174 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 01:53:37.133183    4174 out.go:99] Pulling base image v0.0.48-1767944074-22401 ...
	I0110 01:53:37.133263    4174 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0110 01:53:37.133304    4174 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 01:53:37.149169    4174 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 to local cache
	I0110 01:53:37.149348    4174 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local cache directory
	I0110 01:53:37.149459    4174 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 to local cache
	I0110 01:53:37.184088    4174 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0110 01:53:37.184123    4174 cache.go:65] Caching tarball of preloaded images
	I0110 01:53:37.184285    4174 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0110 01:53:37.187685    4174 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0110 01:53:37.187727    4174 preload.go:269] Downloading preload from https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0110 01:53:37.187734    4174 preload.go:336] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I0110 01:53:37.265873    4174 preload.go:313] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I0110 01:53:37.266008    4174 download.go:114] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0110 01:53:40.515103    4174 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0110 01:53:40.515545    4174 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/download-only-718348/config.json ...
	I0110 01:53:40.515608    4174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/download-only-718348/config.json: {Name:mk5dc566a5d0bcd552d7f17350429dd2589fb492 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 01:53:40.515883    4174 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0110 01:53:40.516131    4174 download.go:114] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22414-2353/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-718348 host does not exist
	  To start a cluster, run: "minikube start -p download-only-718348"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-718348
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.35.0/json-events (4.02s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-521519 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-521519 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.018437026s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (4.02s)

                                                
                                    
TestDownloadOnly/v1.35.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I0110 01:53:48.857276    4168 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
I0110 01:53:48.857311    4168 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-521519
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-521519: exit status 85 (86.244667ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-718348 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-718348 │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │ 10 Jan 26 01:53 UTC │
	│ delete  │ -p download-only-718348                                                                                                                                                   │ download-only-718348 │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │ 10 Jan 26 01:53 UTC │
	│ start   │ -o=json --download-only -p download-only-521519 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-521519 │ jenkins │ v1.37.0 │ 10 Jan 26 01:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 01:53:44
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 01:53:44.880121    4372 out.go:360] Setting OutFile to fd 1 ...
	I0110 01:53:44.880609    4372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:53:44.880640    4372 out.go:374] Setting ErrFile to fd 2...
	I0110 01:53:44.880659    4372 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 01:53:44.881081    4372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 01:53:44.881649    4372 out.go:368] Setting JSON to true
	I0110 01:53:44.882413    4372 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2174,"bootTime":1768007851,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 01:53:44.882580    4372 start.go:143] virtualization:  
	I0110 01:53:44.886064    4372 out.go:99] [download-only-521519] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 01:53:44.886253    4372 notify.go:221] Checking for updates...
	I0110 01:53:44.889415    4372 out.go:171] MINIKUBE_LOCATION=22414
	I0110 01:53:44.892501    4372 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 01:53:44.895494    4372 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 01:53:44.898386    4372 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 01:53:44.901335    4372 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0110 01:53:44.907128    4372 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0110 01:53:44.907420    4372 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 01:53:44.940008    4372 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 01:53:44.940123    4372 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 01:53:45.000451    4372 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:48 SystemTime:2026-01-10 01:53:44.990597427 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 01:53:45.000554    4372 docker.go:319] overlay module found
	I0110 01:53:45.003598    4372 out.go:99] Using the docker driver based on user configuration
	I0110 01:53:45.003639    4372 start.go:309] selected driver: docker
	I0110 01:53:45.003647    4372 start.go:928] validating driver "docker" against <nil>
	I0110 01:53:45.003766    4372 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 01:53:45.128751    4372 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:48 SystemTime:2026-01-10 01:53:45.115940628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 01:53:45.128922    4372 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 01:53:45.129210    4372 start_flags.go:417] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0110 01:53:45.129363    4372 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 01:53:45.154118    4372 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-521519 host does not exist
	  To start a cluster, run: "minikube start -p download-only-521519"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.35.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-521519
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I0110 01:53:50.004340    4168 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-933958 --alsologtostderr --binary-mirror http://127.0.0.1:44897 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-933958" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-933958
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-106930
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-106930: exit status 85 (67.431967ms)

                                                
                                                
-- stdout --
	* Profile "addons-106930" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-106930"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-106930
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-106930: exit status 85 (63.470465ms)

                                                
                                                
-- stdout --
	* Profile "addons-106930" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-106930"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (129.73s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-106930 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-106930 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m9.726363419s)
--- PASS: TestAddons/Setup (129.73s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.23s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-106930 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-106930 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.23s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (8.92s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-106930 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-106930 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [2171cdb5-7484-4446-ae00-c6cb9bff5211] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [2171cdb5-7484-4446-ae00-c6cb9bff5211] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.00383498s
addons_test.go:696: (dbg) Run:  kubectl --context addons-106930 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-106930 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-106930 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-106930 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.92s)
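
The FakeCredentials flow above exercises the gcp-auth addon's webhook: the busybox pod comes up with GOOGLE_APPLICATION_CREDENTIALS pointing at an injected /google-app-creds.json. A minimal Go sketch of the same spot check, outside the test harness, is below; the context and pod names are copied from the log and the check is reduced to printing what the pod sees.

	// gcpauth_check_sketch.go: confirm the gcp-auth webhook injected credentials into the pod.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "addons-106930",
			"exec", "busybox", "--", "printenv", "GOOGLE_APPLICATION_CREDENTIALS").Output()
		if err != nil {
			fmt.Println("GOOGLE_APPLICATION_CREDENTIALS not injected:", err)
			return
		}
		// The log shows the injected file at /google-app-creds.json inside the container.
		fmt.Println("credentials file injected at:", strings.TrimSpace(string(out)))
	}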

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.41s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-106930
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-106930: (12.13676132s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-106930
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-106930
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-106930
--- PASS: TestAddons/StoppedEnableDisable (12.41s)

                                                
                                    
x
+
TestCertOptions (30.44s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-295914 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0110 02:41:01.891928    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-295914 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (27.684161878s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-295914 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-295914 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-295914 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-295914" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-295914
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-295914: (2.04035483s)
--- PASS: TestCertOptions (30.44s)
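
TestCertOptions requests extra apiserver SANs (--apiserver-ips=192.168.15.15, --apiserver-names=www.google.com, port 8555) and then inspects /var/lib/minikube/certs/apiserver.crt with openssl inside the node. Below is a minimal Go sketch of the same SAN check done with crypto/x509; it assumes the certificate has first been copied out of the node to a local apiserver.crt (for example via minikube ssh "sudo cat ..." redirected to a file), which the test itself does not do.

	// certoptions_check_sketch.go: verify requested names and IPs appear in the apiserver cert.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"net"
		"os"
	)

	func main() {
		raw, err := os.ReadFile("apiserver.crt") // copied from /var/lib/minikube/certs/ on the node
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block in apiserver.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("DNS names:", cert.DNSNames) // expect localhost and www.google.com among them
		want := net.ParseIP("192.168.15.15")
		for _, ip := range cert.IPAddresses {
			if ip.Equal(want) {
				fmt.Println("found requested apiserver IP:", ip)
			}
		}
	}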

                                                
                                    
x
+
TestCertExpiration (224.51s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-213257 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-213257 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.108883368s)
E0110 02:38:21.392447    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-213257 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E0110 02:40:18.347859    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-213257 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (14.970180347s)
helpers_test.go:176: Cleaning up "cert-expiration-213257" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-213257
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-213257: (2.429442102s)
--- PASS: TestCertExpiration (224.51s)
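
Here --cert-expiration first forces a 3-minute certificate lifetime, then the same profile is restarted with 8760h (one year). A minimal Go sketch of checking the resulting validity window on the profile's client certificate is below; it assumes the default $HOME/.minikube location with the profiles/<name>/client.crt layout visible elsewhere in this log (the CI run overrides the base directory with MINIKUBE_HOME).

	// certexpiration_check_sketch.go: report how long the profile's client cert remains valid.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"path/filepath"
		"time"
	)

	func main() {
		home, err := os.UserHomeDir()
		if err != nil {
			panic(err)
		}
		crt := filepath.Join(home, ".minikube", "profiles", "cert-expiration-213257", "client.crt")
		raw, err := os.ReadFile(crt)
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block in client.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Printf("client cert valid until %s (%s from now)\n",
			cert.NotAfter.Format(time.RFC3339), time.Until(cert.NotAfter).Round(time.Minute))
	}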

                                                
                                    
x
+
TestErrorSpam/setup (26.95s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-509743 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-509743 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-509743 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-509743 --driver=docker  --container-runtime=crio: (26.953078559s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.35.0."
--- PASS: TestErrorSpam/setup (26.95s)

                                                
                                    
x
+
TestErrorSpam/start (0.79s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

                                                
                                    
x
+
TestErrorSpam/status (1.14s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 status
--- PASS: TestErrorSpam/status (1.14s)

                                                
                                    
x
+
TestErrorSpam/pause (6.19s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 pause: exit status 80 (2.402391715s)

                                                
                                                
-- stdout --
	* Pausing node nospam-509743 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:57:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 pause: exit status 80 (1.540280522s)

                                                
                                                
-- stdout --
	* Pausing node nospam-509743 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:57:57Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 pause: exit status 80 (2.244368214s)

                                                
                                                
-- stdout --
	* Pausing node nospam-509743 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:57:59Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.19s)
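
All three pause attempts fail the same way: the pause path lists running containers with `sudo runc list -f json` inside the node, runc exits non-zero because /run/runc is missing, and minikube surfaces that as GUEST_PAUSE with exit code 80. A minimal Go sketch that reproduces just that probe over `minikube ssh` is below; the binary path and profile name are copied from the log.

	// pauseprobe_sketch.go: rerun the runc listing that the failed pause commands depend on.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "nospam-509743",
			"ssh", "sudo runc list -f json")
		out, err := cmd.CombinedOutput()
		if err != nil {
			// Mirrors the GUEST_PAUSE failure above: runc cannot open its state directory.
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("running containers:\n%s", out)
	}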

                                                
                                    
x
+
TestErrorSpam/unpause (5.38s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 unpause: exit status 80 (2.014598399s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-509743 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:58:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 unpause: exit status 80 (1.852973702s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-509743 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:58:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 unpause: exit status 80 (1.512438865s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-509743 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T01:58:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.38s)

                                                
                                    
x
+
TestErrorSpam/stop (1.49s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 stop: (1.29550014s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-509743 --log_dir /tmp/nospam-509743 stop
--- PASS: TestErrorSpam/stop (1.49s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22414-2353/.minikube/files/etc/test/nested/copy/4168/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (47.7s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-arm64 start -p functional-866562 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2244: (dbg) Done: out/minikube-linux-arm64 start -p functional-866562 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (47.702371481s)
--- PASS: TestFunctional/serial/StartWithProxy (47.70s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (29.11s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0110 01:58:58.043616    4168 config.go:182] Loaded profile config "functional-866562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-866562 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-866562 --alsologtostderr -v=8: (29.112106448s)
functional_test.go:678: soft start took 29.112604543s for "functional-866562" cluster.
I0110 01:59:27.156008    4168 config.go:182] Loaded profile config "functional-866562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (29.11s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-866562 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.95s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-866562 cache add registry.k8s.io/pause:3.1: (1.317774579s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-866562 cache add registry.k8s.io/pause:3.3: (1.33247487s)
functional_test.go:1069: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 cache add registry.k8s.io/pause:latest
functional_test.go:1069: (dbg) Done: out/minikube-linux-arm64 -p functional-866562 cache add registry.k8s.io/pause:latest: (1.301288735s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.95s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-866562 /tmp/TestFunctionalserialCacheCmdcacheadd_local3444865348/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 cache add minikube-local-cache-test:functional-866562
functional_test.go:1114: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 cache delete minikube-local-cache-test:functional-866562
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-866562
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.21s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.88s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-866562 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (281.102169ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.88s)
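
The cache_reload sequence above deletes pause:latest from the node, confirms with `crictl inspecti` that it is gone (exit status 1), and then uses `minikube cache reload` to push every cached image back onto the node. A minimal Go sketch of that same round trip, using the commands shown in the log, follows; error handling is reduced to prints.

	// cachereload_sketch.go: remove an image on the node, reload the cache, verify it is back.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run invokes the local minikube binary and echoes the combined output.
	func run(args ...string) error {
		out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s\n", args, out)
		return err
	}

	func main() {
		const profile = "functional-866562"
		run("-p", profile, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
		if err := run("-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
			fmt.Println("unexpected: image still present after rmi")
		}
		run("-p", profile, "cache", "reload")
		if err := run("-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
			fmt.Println("image still missing after cache reload:", err)
		}
	}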

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 kubectl -- --context functional-866562 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-866562 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (30.63s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-866562 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-866562 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (30.632047381s)
functional_test.go:776: restart took 30.632140236s for "functional-866562" cluster.
I0110 02:00:05.761796    4168 config.go:182] Loaded profile config "functional-866562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (30.63s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-866562 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
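
ComponentHealth pulls the control-plane pods as JSON and checks that each is in phase Running and reports Ready, as the log lines above show per component. A minimal Go sketch of that check is below; the context name comes from the log, and the structs model only the fields the check reads.

	// componenthealth_sketch.go: print phase and Ready status for control-plane pods.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-866562",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%-28s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}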

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.43s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-arm64 -p functional-866562 logs: (1.430093886s)
--- PASS: TestFunctional/serial/LogsCmd (1.43s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 logs --file /tmp/TestFunctionalserialLogsFileCmd82495197/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-arm64 -p functional-866562 logs --file /tmp/TestFunctionalserialLogsFileCmd82495197/001/logs.txt: (1.481851307s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (3.89s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-866562 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-866562
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-866562: exit status 115 (368.807868ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30521 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-866562 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.89s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-866562 config get cpus: exit status 14 (66.395105ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-866562 config get cpus: exit status 14 (62.476908ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
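
The ConfigCmd sequence shows the contract the test relies on: `minikube config get` exits with status 14 and "specified key could not be found in config" while the key is unset, and with 0 after `config set`. A minimal Go sketch that drives the same unset/get/set/get cycle and extracts the exit code is below; the binary path and profile name are copied from the log, and errors from the set/unset calls are deliberately ignored.

	// configcmd_sketch.go: observe minikube config get's exit code before and after setting a key.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// configGetExitCode runs `minikube config get <key>` and returns its exit code.
	func configGetExitCode(key string) int {
		err := exec.Command("out/minikube-linux-arm64", "-p", "functional-866562",
			"config", "get", key).Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return exitErr.ExitCode()
		}
		if err != nil {
			return -1 // command could not be started at all
		}
		return 0
	}

	func main() {
		exec.Command("out/minikube-linux-arm64", "-p", "functional-866562", "config", "unset", "cpus").Run()
		fmt.Println("after unset:", configGetExitCode("cpus")) // 14 in the log above
		exec.Command("out/minikube-linux-arm64", "-p", "functional-866562", "config", "set", "cpus", "2").Run()
		fmt.Println("after set:  ", configGetExitCode("cpus")) // 0: the key now resolves
	}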

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (18.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-866562 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 0 -p functional-866562 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 29313: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (18.55s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-arm64 start -p functional-866562 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-866562 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (205.643235ms)

                                                
                                                
-- stdout --
	* [functional-866562] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22414
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:00:47.255890   29045 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:00:47.256048   29045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:00:47.256057   29045 out.go:374] Setting ErrFile to fd 2...
	I0110 02:00:47.256063   29045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:00:47.256309   29045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:00:47.256664   29045 out.go:368] Setting JSON to false
	I0110 02:00:47.257494   29045 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2597,"bootTime":1768007851,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 02:00:47.257570   29045 start.go:143] virtualization:  
	I0110 02:00:47.260632   29045 out.go:179] * [functional-866562] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:00:47.264290   29045 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:00:47.264385   29045 notify.go:221] Checking for updates...
	I0110 02:00:47.269915   29045 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:00:47.272832   29045 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:00:47.275690   29045 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 02:00:47.278529   29045 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:00:47.281363   29045 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:00:47.284712   29045 config.go:182] Loaded profile config "functional-866562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:00:47.285269   29045 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:00:47.310646   29045 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:00:47.310746   29045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:00:47.389387   29045 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-10 02:00:47.379502501 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:00:47.390196   29045 docker.go:319] overlay module found
	I0110 02:00:47.395498   29045 out.go:179] * Using the docker driver based on existing profile
	I0110 02:00:47.398861   29045 start.go:309] selected driver: docker
	I0110 02:00:47.398881   29045 start.go:928] validating driver "docker" against &{Name:functional-866562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-866562 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:00:47.398984   29045 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:00:47.403209   29045 out.go:203] 
	W0110 02:00:47.406981   29045 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0110 02:00:47.410108   29045 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 start -p functional-866562 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.49s)
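
The first dry run above is rejected with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MB is below the 1800MB minimum quoted in the message, while the second dry run (without --memory) succeeds. A minimal Go sketch of that validation rule follows; the threshold mirrors the error message in this log rather than minikube's source, so treat it as illustrative only.

	// memcheck_sketch.go: reject memory requests below the minimum quoted in the dry-run error.
	package main

	import "fmt"

	const minUsableMemoryMB = 1800 // value taken from the RSRC_INSUFFICIENT_REQ_MEMORY message

	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMemoryMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMemoryMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateMemory(250))  // fails, as in the --memory 250MB dry run
		fmt.Println(validateMemory(4096)) // passes, prints <nil>
	}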

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 start -p functional-866562 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-866562 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (211.756165ms)

                                                
                                                
-- stdout --
	* [functional-866562] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22414
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:00:47.053358   28973 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:00:47.053454   28973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:00:47.053459   28973 out.go:374] Setting ErrFile to fd 2...
	I0110 02:00:47.053464   28973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:00:47.053839   28973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:00:47.054203   28973 out.go:368] Setting JSON to false
	I0110 02:00:47.054987   28973 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2596,"bootTime":1768007851,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 02:00:47.055047   28973 start.go:143] virtualization:  
	I0110 02:00:47.058896   28973 out.go:179] * [functional-866562] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I0110 02:00:47.062932   28973 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:00:47.063015   28973 notify.go:221] Checking for updates...
	I0110 02:00:47.068965   28973 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:00:47.072016   28973 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:00:47.074924   28973 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 02:00:47.077836   28973 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:00:47.080878   28973 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:00:47.084420   28973 config.go:182] Loaded profile config "functional-866562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:00:47.084993   28973 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:00:47.109865   28973 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:00:47.109977   28973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:00:47.187260   28973 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2026-01-10 02:00:47.175833662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:00:47.187367   28973 docker.go:319] overlay module found
	I0110 02:00:47.191520   28973 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0110 02:00:47.194504   28973 start.go:309] selected driver: docker
	I0110 02:00:47.194527   28973 start.go:928] validating driver "docker" against &{Name:functional-866562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-866562 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 02:00:47.194638   28973 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:00:47.198214   28973 out.go:203] 
	W0110 02:00:47.201142   28973 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0110 02:00:47.204093   28973 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)
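Note: the French output above is the expected result, not a rendering problem. The test deliberately under-allocates memory so minikube fails with RSRC_INSUFFICIENT_REQ_MEMORY, and asserts that the error is localized. A minimal sketch of the same check, assuming minikube selects its translations from the locale environment (the env handling here is illustrative, not the actual functional_test.go code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Deliberately request less memory than minikube's usable minimum so the
	// run fails fast with RSRC_INSUFFICIENT_REQ_MEMORY; --dry-run keeps it cheap.
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "functional-866562", "--dry-run", "--memory", "250MB",
		"--driver=docker", "--container-runtime=crio")
	// Assumption: minikube picks its message catalogue from the locale
	// environment; force French for this invocation only.
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
	out, _ := cmd.CombinedOutput()
	if strings.Contains(string(out), "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("localized error message found")
	} else {
		fmt.Println("expected French error message not found")
	}
}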

                                                
                                    
TestFunctional/parallel/StatusCmd (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.22s)
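The second invocation above exercises the custom status format: -f takes a Go template rendered against minikube's status fields (the "kublet" label is simply literal text in the template string used by the test). A hedged sketch of the same call, with the label spelled out:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// -f renders the status through a Go template; the field names
	// (.Host, .Kubelet, .APIServer, .Kubeconfig) are the ones used above.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-866562",
		"status", "-f",
		"host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}").Output()
	if err != nil {
		// minikube status also exits non-zero when a component is stopped.
		fmt.Println("status returned non-zero:", err)
	}
	fmt.Println(string(out))
}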

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-866562 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-866562 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-vt6lt" [e9d21cbd-726f-46cb-a901-b7eeaf8a1c97] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-vt6lt" [e9d21cbd-726f-46cb-a901-b7eeaf8a1c97] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003883773s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:32255
functional_test.go:1685: http://192.168.49.2:32255: success! body:
Request served by hello-node-connect-5d95464fd4-vt6lt

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:32255
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.59s)
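This test round-trips a NodePort service: deploy the echo-server image, expose it on port 8080, ask minikube for the reachable URL, and verify that an HTTP GET is answered by the pod (the request headers echoed back above confirm it). A condensed sketch of the same flow, shelling out to kubectl and minikube the way the harness does (error handling trimmed to the essentials):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

// run executes a command and returns its trimmed combined output, panicking on error.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
	}
	return strings.TrimSpace(string(out))
}

func main() {
	// Create and expose the deployment, as in the test above.
	run("kubectl", "--context", "functional-866562", "create", "deployment",
		"hello-node-connect", "--image", "ghcr.io/medyagh/image-mirrors/kicbase/echo-server")
	run("kubectl", "--context", "functional-866562", "expose", "deployment",
		"hello-node-connect", "--type=NodePort", "--port=8080")
	// (The real test waits for the pod to become Ready before continuing.)

	// Ask minikube for the NodePort URL and hit it once.
	url := run("out/minikube-linux-arm64", "-p", "functional-866562",
		"service", "hello-node-connect", "--url")
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d\n%s\n", url, resp.StatusCode, body)
}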

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (20.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [32fff899-29b0-4027-b7da-d0b963825278] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004133235s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-866562 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-866562 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-866562 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-866562 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [c8d3af9f-11e3-4d76-b2ee-16bd654a5439] Pending
helpers_test.go:353: "sp-pod" [c8d3af9f-11e3-4d76-b2ee-16bd654a5439] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003743768s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-866562 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-866562 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-866562 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [767fc7bf-04ee-49a4-9b19-23ac12ce25e4] Pending
helpers_test.go:353: "sp-pod" [767fc7bf-04ee-49a4-9b19-23ac12ce25e4] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003560736s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-866562 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.61s)
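The sequence above is a persistence check: a PVC and a pod mounting it are created, a marker file is written, the pod is deleted and recreated from the same manifest, and the file must still be present. A rough sketch of the verification steps, using the manifest paths from the log (the real test also waits for sp-pod to reach Running between steps):

package main

import (
	"fmt"
	"os/exec"
)

// kubectl wraps a kubectl call against the test profile's context.
func kubectl(args ...string) ([]byte, error) {
	full := append([]string{"--context", "functional-866562"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	// Write a marker file into the volume mounted by the running pod.
	if out, err := kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo"); err != nil {
		panic(fmt.Sprintf("touch failed: %v\n%s", err, out))
	}
	// Delete the pod and recreate it from the same manifest; the PVC (and
	// therefore /tmp/mount) must survive the pod's lifecycle.
	if out, err := kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml"); err != nil {
		panic(fmt.Sprintf("delete failed: %v\n%s", err, out))
	}
	if out, err := kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml"); err != nil {
		panic(fmt.Sprintf("apply failed: %v\n%s", err, out))
	}
	// (Wait for the new sp-pod to be Running here before checking.)
	out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Printf("ls /tmp/mount: %s (err=%v)\n", out, err)
}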

                                                
                                    
TestFunctional/parallel/SSHCmd (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh -n functional-866562 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 cp functional-866562:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3144683937/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh -n functional-866562 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh -n functional-866562 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.02s)
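The cp test copies a file from the host into the node, back out again, and to a path whose parent directories do not yet exist, confirming each copy over SSH. A minimal sketch of the first round trip, mirroring the commands in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "functional-866562"
	// Copy a local file into the minikube node...
	if out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("cp failed: %v\n%s", err, out))
	}
	// ...then read it back over SSH to confirm it arrived intact.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt").CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Printf("node copy contains: %s\n", out)
}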

                                                
                                    
TestFunctional/parallel/FileSync (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/4168/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh "sudo cat /etc/test/nested/copy/4168/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

                                                
                                    
TestFunctional/parallel/CertSync (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/4168.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh "sudo cat /etc/ssl/certs/4168.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/4168.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh "sudo cat /usr/share/ca-certificates/4168.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/41682.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh "sudo cat /etc/ssl/certs/41682.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/41682.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh "sudo cat /usr/share/ca-certificates/41682.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.85s)
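CertSync verifies that a user-supplied certificate (named after the test process id, 4168 in this run) has been synced into the node at both /etc/ssl/certs and /usr/share/ca-certificates, along with what appears to be its hashed symlink name (51391683.0). A sketch of the same probes:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Paths taken from the test log above; each one must be readable in the node.
	paths := []string{
		"/etc/ssl/certs/4168.pem",
		"/usr/share/ca-certificates/4168.pem",
		"/etc/ssl/certs/51391683.0",
	}
	for _, p := range paths {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-866562",
			"ssh", "sudo cat "+p).CombinedOutput()
		fmt.Printf("%s: %d bytes (err=%v)\n", p, len(out), err)
	}
}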

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-866562 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh "sudo systemctl is-active docker"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-866562 ssh "sudo systemctl is-active docker": exit status 1 (374.937192ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh "sudo systemctl is-active containerd"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-866562 ssh "sudo systemctl is-active containerd": exit status 1 (347.818189ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)
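Because this profile runs with --container-runtime=crio, both docker and containerd are expected to be inactive inside the node; systemctl is-active therefore exits non-zero (status 3 in the log), which the harness treats as the passing outcome. A sketch of the same probe:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		// On a crio profile these units should not be running, so the ssh'd
		// systemctl call is expected to fail with "inactive" on stdout.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-866562",
			"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
		fmt.Printf("%s: %q (err=%v)\n", unit, strings.TrimSpace(string(out)), err)
	}
}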

                                                
                                    
TestFunctional/parallel/License (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-866562 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-866562
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866562
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-866562 image ls --format short --alsologtostderr:
I0110 02:00:57.853050   30662 out.go:360] Setting OutFile to fd 1 ...
I0110 02:00:57.853146   30662 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 02:00:57.853187   30662 out.go:374] Setting ErrFile to fd 2...
I0110 02:00:57.853192   30662 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 02:00:57.853438   30662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
I0110 02:00:57.854044   30662 config.go:182] Loaded profile config "functional-866562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 02:00:57.854154   30662 config.go:182] Loaded profile config "functional-866562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 02:00:57.854654   30662 cli_runner.go:164] Run: docker container inspect functional-866562 --format={{.State.Status}}
I0110 02:00:57.873575   30662 ssh_runner.go:195] Run: systemctl --version
I0110 02:00:57.873622   30662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-866562
I0110 02:00:57.891299   30662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/functional-866562/id_rsa Username:docker}
I0110 02:00:58.002275   30662 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.77s)
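As the stderr above shows, image ls works by SSHing into the node and running crictl images --output json, which minikube then reprints in the requested format. A sketch that cycles through the formats exercised by this and the following subtests:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, format := range []string{"short", "table", "json", "yaml"} {
		// Same listing each time; only the output rendering changes.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-866562",
			"image", "ls", "--format", format).Output()
		if err != nil {
			fmt.Printf("format %s failed: %v\n", format, err)
			continue
		}
		fmt.Printf("--- %s ---\n%s\n", format, out)
	}
}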

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-866562 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                       IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-scheduler                    │ v1.35.0                               │ ddc8422d4d35a │ 49.8MB │
│ registry.k8s.io/pause                             │ 3.1                                   │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                             │ 3.10.1                                │ d7b100cd9a77b │ 520kB  │
│ gcr.io/k8s-minikube/busybox                       │ latest                                │ 71a676dd070f4 │ 1.63MB │
│ localhost/my-image                                │ functional-866562                     │ 82406f5b8f4eb │ 1.64MB │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1                               │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0                               │ 88898f1d1a62a │ 72.2MB │
│ registry.k8s.io/kube-proxy                        │ v1.35.0                               │ de369f46c2ff5 │ 74.1MB │
│ docker.io/kindest/kindnetd                        │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ c96ee3c174987 │ 108MB  │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc                          │ 1611cd07b61d5 │ 3.77MB │
│ localhost/minikube-local-cache-test               │ functional-866562                     │ 54cffed24179e │ 3.33kB │
│ registry.k8s.io/pause                             │ latest                                │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/etcd                              │ 3.6.6-0                               │ 271e49a0ebc56 │ 60.9MB │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                                    │ ba04bb24b9575 │ 29MB   │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-866562                     │ ce2d2cda2d858 │ 4.79MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest                                │ ce2d2cda2d858 │ 4.79MB │
│ public.ecr.aws/nginx/nginx                        │ alpine                                │ 611c6647fcbbc │ 62.6MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0                               │ c3fcf259c473a │ 85MB   │
│ registry.k8s.io/pause                             │ 3.3                                   │ 3d18732f8686c │ 487kB  │
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b                    │ b1a8c6f707935 │ 111MB  │
└───────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-866562 image ls --format table --alsologtostderr:
I0110 02:01:03.253669   31169 out.go:360] Setting OutFile to fd 1 ...
I0110 02:01:03.253832   31169 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 02:01:03.253862   31169 out.go:374] Setting ErrFile to fd 2...
I0110 02:01:03.253920   31169 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 02:01:03.254295   31169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
I0110 02:01:03.255263   31169 config.go:182] Loaded profile config "functional-866562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 02:01:03.255471   31169 config.go:182] Loaded profile config "functional-866562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 02:01:03.256272   31169 cli_runner.go:164] Run: docker container inspect functional-866562 --format={{.State.Status}}
I0110 02:01:03.274689   31169 ssh_runner.go:195] Run: systemctl --version
I0110 02:01:03.274741   31169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-866562
I0110 02:01:03.292770   31169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/functional-866562/id_rsa Username:docker}
I0110 02:01:03.399779   31169 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
E0110 02:01:04.451070    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
2026/01/10 02:01:06 [DEBUG] GET http://127.0.0.1:39843/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 image ls --format json --alsologtostderr
E0110 02:01:03.170838    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-866562 image ls --format json --alsologtostderr:
[{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"42263767"},{"id":"56734e6a6fb312d884286c7defc66e5618b7b2ea033b3ddedf6fb19f8e4366d4","repoDigests":["docker.io/library/2ba2184ffdc1336ca8df2b1640d45b7d81b04b24859755685d2cba29bccbc61e-tmp@sha256:b1a840728e237a14f6d1f0fd1591874dea2762b8edb75732b4c6eea74caa6815"],"repoTags":[],"size":"1638179"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b912867220
9474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6","registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74491780"},{"id":"c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856","repoDigests":["registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3","registry.k8s.io/kube-apiserver@sha256:bd1ea721ef1552db1884b5e8753c61667620556e5e0bfe6be8b32b6a77d7a16d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"85015535"},{"id":"ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f","registry.k8s.io/kube-scheduler@sha256:36fe4e2d4335ff20aa335e673e7490151d57ffa753ef9282b8786930e6014ee3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"49822549"},{"id":"8057e0500773a37cde2cff041eb13ebd68c74
8419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"611c6647fcbbcffad724d5a5a85385d496c6b2a9c397459cb0c8316c40af5371","repoDigests":["public.ecr.aws/nginx/nginx@sha256:be49159753b31dc6d536fca5b044033e1e3e836667959ac238471b2ce50b31b0","public.ecr.aws/nginx/nginx@sha256:a6fbdb4b73007c40f67bfc798a2045503b634f9c53e8309396e5aaf38c418ac0"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"62642350"},{"id":"271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57","repoDigests":["registry.k8s.io/
etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890","registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"60850387"},{"id":"de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5","repoDigests":["registry.k8s.io/kube-proxy@sha256:817c21201edf58f5fe5be560c11178a250f7ba08a010a4cb73efcb0d98b467a5","registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"74106775"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13","repoDigests
":["docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3","docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"108362109"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e","gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","ghcr.io/medyag
h/image-mirrors/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866562","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4788229"},{"id":"54cffed24179e3c80fc660546760e80906f9c57ecf6838b1b7b533ea046cad54","repoDigests":["localhost/minikube-local-cache-test@sha256:0beed1c2a396093c7011db1b63554cfd7d1eb49a37ddf5ba8c2583a6de29ed62"],"repoTags":["localhost/minikube-local-cache-test:functional-866562"],"size":"3330"},{"id":"82406f5b8f4eb3c57c1a79e7f3cf0399f3b5ee29b220d9465549278f67429346","repoDigests":["localhost/my-image@sha256:8dbb5e83d902b8af73a2edf7cefd4e480ddabf8c9c48baad21fa404c46fe4fb5"],"repoTags":["localhost/my-image:functional-866562"],"size":"1640791"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause
:3.3"],"size":"487479"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf","docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"247562353"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4
c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:061d470c1ad66ac12ef70502f257dfb1771cb45ea840d875ef53781a61e81503","registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"72170321"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-866562 image ls --format json --alsologtostderr:
I0110 02:01:03.023719   31133 out.go:360] Setting OutFile to fd 1 ...
I0110 02:01:03.023920   31133 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 02:01:03.023925   31133 out.go:374] Setting ErrFile to fd 2...
I0110 02:01:03.023930   31133 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 02:01:03.024174   31133 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
I0110 02:01:03.024823   31133 config.go:182] Loaded profile config "functional-866562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 02:01:03.024950   31133 config.go:182] Loaded profile config "functional-866562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 02:01:03.025436   31133 cli_runner.go:164] Run: docker container inspect functional-866562 --format={{.State.Status}}
I0110 02:01:03.053827   31133 ssh_runner.go:195] Run: systemctl --version
I0110 02:01:03.053875   31133 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-866562
I0110 02:01:03.071511   31133 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/functional-866562/id_rsa Username:docker}
I0110 02:01:03.174508   31133 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-866562 image ls --format yaml --alsologtostderr:
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 611c6647fcbbcffad724d5a5a85385d496c6b2a9c397459cb0c8316c40af5371
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:be49159753b31dc6d536fca5b044033e1e3e836667959ac238471b2ce50b31b0
- public.ecr.aws/nginx/nginx@sha256:a6fbdb4b73007c40f67bfc798a2045503b634f9c53e8309396e5aaf38c418ac0
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "62642350"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
- registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74491780"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: c96ee3c17498748ccc544ba99ee8ffeb020fc335b230b43cd28bf43bed229a13
repoDigests:
- docker.io/kindest/kindnetd@sha256:f1260f5691195cc9a693dc0b55178aa724d944efd62486a8320f0583272b1fa3
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "108362109"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "247562353"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866562
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4788229"
- id: c3fcf259c473a57a5d7da116e29161904491091743512d27467c907c5516f856
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3
- registry.k8s.io/kube-apiserver@sha256:bd1ea721ef1552db1884b5e8753c61667620556e5e0bfe6be8b32b6a77d7a16d
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "85015535"
- id: de369f46c2ff55c31ea783a663eb203caa820f3db1f9b9c935e79e7d1e9fd9e5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:817c21201edf58f5fe5be560c11178a250f7ba08a010a4cb73efcb0d98b467a5
- registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "74106775"
- id: ddc8422d4d35a6fc66c34be61e24df795e5cebf197eb546f62740d0bafef874f
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f
- registry.k8s.io/kube-scheduler@sha256:36fe4e2d4335ff20aa335e673e7490151d57ffa753ef9282b8786930e6014ee3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "49822549"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 54cffed24179e3c80fc660546760e80906f9c57ecf6838b1b7b533ea046cad54
repoDigests:
- localhost/minikube-local-cache-test@sha256:0beed1c2a396093c7011db1b63554cfd7d1eb49a37ddf5ba8c2583a6de29ed62
repoTags:
- localhost/minikube-local-cache-test:functional-866562
size: "3330"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "42263767"
- id: 271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57
repoDigests:
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
- registry.k8s.io/etcd@sha256:aa0d8bc8f6a6c3655b8efe0a10c5bf052f5574ebe13f904c5b0c9002ce4b2561
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "60850387"
- id: 88898f1d1a62a3ea9db5d4d099dee7aa52ebe8191016c5b3c721388a309983e0
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:061d470c1ad66ac12ef70502f257dfb1771cb45ea840d875ef53781a61e81503
- registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "72170321"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-866562 image ls --format yaml --alsologtostderr:
I0110 02:00:58.611386   30710 out.go:360] Setting OutFile to fd 1 ...
I0110 02:00:58.611748   30710 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 02:00:58.611781   30710 out.go:374] Setting ErrFile to fd 2...
I0110 02:00:58.611817   30710 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 02:00:58.612325   30710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
I0110 02:00:58.614339   30710 config.go:182] Loaded profile config "functional-866562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 02:00:58.614569   30710 config.go:182] Loaded profile config "functional-866562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 02:00:58.615376   30710 cli_runner.go:164] Run: docker container inspect functional-866562 --format={{.State.Status}}
I0110 02:00:58.648387   30710 ssh_runner.go:195] Run: systemctl --version
I0110 02:00:58.648465   30710 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-866562
I0110 02:00:58.674193   30710 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/functional-866562/id_rsa Username:docker}
I0110 02:00:58.791547   30710 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-866562 ssh pgrep buildkitd: exit status 1 (339.482338ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 image build -t localhost/my-image:functional-866562 testdata/build --alsologtostderr
E0110 02:01:01.891961    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:01:01.897285    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:01:01.907637    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:01:01.927951    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:01:01.968467    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:01:02.049098    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:01:02.209433    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:01:02.530150    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-866562 image build -t localhost/my-image:functional-866562 testdata/build --alsologtostderr: (3.558791994s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-866562 image build -t localhost/my-image:functional-866562 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 56734e6a6fb
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-866562
--> 82406f5b8f4
Successfully tagged localhost/my-image:functional-866562
82406f5b8f4eb3c57c1a79e7f3cf0399f3b5ee29b220d9465549278f67429346
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-866562 image build -t localhost/my-image:functional-866562 testdata/build --alsologtostderr:
I0110 02:00:59.245066   30828 out.go:360] Setting OutFile to fd 1 ...
I0110 02:00:59.245319   30828 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 02:00:59.245347   30828 out.go:374] Setting ErrFile to fd 2...
I0110 02:00:59.245367   30828 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 02:00:59.245653   30828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
I0110 02:00:59.246323   30828 config.go:182] Loaded profile config "functional-866562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 02:00:59.247013   30828 config.go:182] Loaded profile config "functional-866562": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 02:00:59.247597   30828 cli_runner.go:164] Run: docker container inspect functional-866562 --format={{.State.Status}}
I0110 02:00:59.269848   30828 ssh_runner.go:195] Run: systemctl --version
I0110 02:00:59.269908   30828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-866562
I0110 02:00:59.297437   30828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/functional-866562/id_rsa Username:docker}
I0110 02:00:59.402474   30828 build_images.go:162] Building image from path: /tmp/build.1338425615.tar
I0110 02:00:59.402550   30828 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0110 02:00:59.413237   30828 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1338425615.tar
I0110 02:00:59.417578   30828 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1338425615.tar: stat -c "%s %y" /var/lib/minikube/build/build.1338425615.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1338425615.tar': No such file or directory
I0110 02:00:59.417605   30828 ssh_runner.go:362] scp /tmp/build.1338425615.tar --> /var/lib/minikube/build/build.1338425615.tar (3072 bytes)
I0110 02:00:59.440760   30828 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1338425615
I0110 02:00:59.450166   30828 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1338425615 -xf /var/lib/minikube/build/build.1338425615.tar
I0110 02:00:59.462250   30828 crio.go:315] Building image: /var/lib/minikube/build/build.1338425615
I0110 02:00:59.462403   30828 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-866562 /var/lib/minikube/build/build.1338425615 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0110 02:01:02.716713   30828 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-866562 /var/lib/minikube/build/build.1338425615 --cgroup-manager=cgroupfs: (3.254270953s)
I0110 02:01:02.716771   30828 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1338425615
I0110 02:01:02.724624   30828 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1338425615.tar
I0110 02:01:02.732591   30828 build_images.go:218] Built localhost/my-image:functional-866562 from /tmp/build.1338425615.tar
I0110 02:01:02.732621   30828 build_images.go:134] succeeded building to: functional-866562
I0110 02:01:02.732627   30828 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.14s)
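The sequence above is the whole crio image-build path: the pgrep probe shows there is no buildkitd inside the node, so minikube ships the build context tarball over ssh and builds it with sudo podman build, then image ls confirms the new tag. Below is a minimal Go sketch of the same check done by hand, assuming the minikube binary at out/minikube-linux-arm64 and the running functional-866562 profile from this log; the run helper is illustrative, not the harness code.

// build_check.go: manual version of the image-build verification above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile, tag := "functional-866562", "localhost/my-image:functional-866562"

	// On the crio runtime there is no buildkitd, so this probe is expected to fail
	// and minikube falls back to `sudo podman build` inside the node.
	if _, err := run("-p", profile, "ssh", "pgrep buildkitd"); err != nil {
		fmt.Println("no buildkitd found, minikube will build with podman")
	}

	// Build the image from the local build context, then verify it is listed.
	if out, err := run("-p", profile, "image", "build", "-t", tag, "testdata/build"); err != nil {
		log.Fatalf("image build failed: %v\n%s", err, out)
	}
	out, err := run("-p", profile, "image", "ls")
	if err != nil || !strings.Contains(out, tag) {
		log.Fatalf("built image %s not listed: %v\n%s", tag, err, out)
	}
	fmt.Println("image built and listed:", tag)
}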

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0: (1.94788687s)
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866562
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.98s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.25s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866562 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-866562 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866562 --alsologtostderr: (1.257542118s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866562 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.19s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-866562 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-866562 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-866562 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-866562 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 26542: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-866562 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-866562 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [8cdab78a-f4c6-4a6e-a39f-81eb6745d349] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [8cdab78a-f4c6-4a6e-a39f-81eb6745d349] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003077659s
I0110 02:00:27.812625    4168 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866562
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866562 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-arm64 -p functional-866562 image ls: (1.334559273s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866562 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866562 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866562
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866562 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866562
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.42s)
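Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon above exercise a complete save/remove/restore round-trip for the echo-server image: export it from the cluster runtime to a tarball, delete it, re-import it, and finally push it back into the host docker daemon. A minimal Go sketch of that round-trip follows, assuming the same binary path, profile and image tag as in the log; the tarball path and the mk helper are illustrative.

// image_roundtrip.go: save/remove/restore round-trip for a cluster image.
package main

import (
	"log"
	"os/exec"
)

func mk(args ...string) {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v failed: %v\n%s", args, err, out)
	}
}

func main() {
	profile := "functional-866562"
	img := "ghcr.io/medyagh/image-mirrors/kicbase/echo-server:" + profile
	tar := "/tmp/echo-server-save.tar" // illustrative path

	mk("-p", profile, "image", "save", img, tar)        // cluster image -> tarball on the host
	mk("-p", profile, "image", "rm", img)               // drop it from the cluster runtime
	mk("-p", profile, "image", "load", tar)             // restore it from the tarball
	mk("-p", profile, "image", "save", "--daemon", img) // push it back into the host docker daemon
}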

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-866562 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
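With the tunnel from StartTunnel running, the nginx-svc LoadBalancer service receives a reachable ingress IP, which the step above reads with a kubectl jsonpath query. Below is a small Go sketch of the same poll, assuming kubectl is on PATH and the functional-866562 context and nginx-svc service from the log exist; the retry loop and its timeout are illustrative choices.

// tunnel_ip.go: wait for the LoadBalancer ingress IP assigned by `minikube tunnel`.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	jsonpath := "jsonpath={.status.loadBalancer.ingress[0].ip}"
	for i := 0; i < 30; i++ {
		out, err := exec.Command("kubectl", "--context", "functional-866562",
			"get", "svc", "nginx-svc", "-o", jsonpath).Output()
		ip := strings.TrimSpace(string(out))
		if err == nil && ip != "" {
			fmt.Println("LoadBalancer ingress IP:", ip) // e.g. 10.97.10.89 in the run above
			return
		}
		time.Sleep(2 * time.Second) // the tunnel may take a moment to assign the IP
	}
	log.Fatal("no ingress IP assigned; is `minikube tunnel` running?")
}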

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.10.89 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-866562 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-866562 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-866562 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-8v96l" [b81d4b65-d4e4-42cc-b236-f7b4d1c50d9c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-8v96l" [b81d4b65-d4e4-42cc-b236-f7b4d1c50d9c] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.058183807s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.26s)
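DeployApp above creates the hello-node deployment from the mirrored echo-server image, exposes it as a NodePort service on port 8080, and waits for the pod to report Running. A minimal Go sketch of that sequence under the same assumptions (kubectl on PATH, the functional-866562 context from the log) follows; the kubectl helper and the poll interval are illustrative.

// deploy_app.go: create, expose and wait for the hello-node deployment.
package main

import (
	"log"
	"os/exec"
	"strings"
	"time"
)

func kubectl(args ...string) (string, error) {
	full := append([]string{"--context", "functional-866562"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	return string(out), err
}

func main() {
	image := "ghcr.io/medyagh/image-mirrors/kicbase/echo-server"
	if out, err := kubectl("create", "deployment", "hello-node", "--image", image); err != nil {
		log.Fatalf("create deployment: %v\n%s", err, out)
	}
	if out, err := kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080"); err != nil {
		log.Fatalf("expose: %v\n%s", err, out)
	}
	// Poll until the hello-node pod reports Running, much as the harness waits on its label selector.
	for i := 0; i < 60; i++ {
		out, _ := kubectl("get", "pods", "-l", "app=hello-node", "-o", "jsonpath={.items[*].status.phase}")
		if strings.Contains(out, "Running") {
			log.Println("hello-node is running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("hello-node never became Running")
}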

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.57s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 service list -o json
functional_test.go:1509: Took "519.365853ms" to run "out/minikube-linux-arm64 -p functional-866562 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:32394
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:32394
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.48s)
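The HTTPS, Format and URL steps all resolve the same NodePort endpoint (http://192.168.49.2:32394 in this run, i.e. node IP plus NodePort) through minikube service. A short Go sketch of that lookup, assuming the binary path and profile name from the log:

// service_url.go: resolve the hello-node NodePort endpoint.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-866562",
		"service", "hello-node", "--url").Output()
	if err != nil {
		log.Fatalf("service --url failed: %v", err)
	}
	// Prints something like http://192.168.49.2:32394.
	fmt.Println("endpoint:", strings.TrimSpace(string(out)))
}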

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.64s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-866562 /tmp/TestFunctionalparallelMountCmdany-port206372692/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1768010444330023347" to /tmp/TestFunctionalparallelMountCmdany-port206372692/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1768010444330023347" to /tmp/TestFunctionalparallelMountCmdany-port206372692/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1768010444330023347" to /tmp/TestFunctionalparallelMountCmdany-port206372692/001/test-1768010444330023347
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-866562 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (447.394362ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0110 02:00:44.777742    4168 retry.go:84] will retry after 700ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 10 02:00 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 10 02:00 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 10 02:00 test-1768010444330023347
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh cat /mount-9p/test-1768010444330023347
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-866562 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [42d5cbe2-03b7-4c3b-b450-77ac73a1e17a] Pending
helpers_test.go:353: "busybox-mount" [42d5cbe2-03b7-4c3b-b450-77ac73a1e17a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [42d5cbe2-03b7-4c3b-b450-77ac73a1e17a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [42d5cbe2-03b7-4c3b-b450-77ac73a1e17a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.002996879s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-866562 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-866562 /tmp/TestFunctionalparallelMountCmdany-port206372692/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.76s)
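The any-port steps mount a host temp directory into the node at /mount-9p over 9p, retry findmnt until the mount appears (the first probe at 02:00:44 fails, the retry succeeds), then verify the files both from minikube ssh and from the busybox-mount pod. A minimal Go sketch of the host-side part of that flow, assuming the same binary and profile; the host directory and retry budget are illustrative.

// mount_check.go: start a 9p mount in the background and wait for it to appear in the node.
package main

import (
	"log"
	"os/exec"
	"time"
)

func ssh(cmd string) error {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-866562", "ssh", cmd).CombinedOutput()
	if err != nil {
		log.Printf("ssh %q: %v\n%s", cmd, err, out)
	}
	return err
}

func main() {
	hostDir := "/tmp/mount-src" // illustrative host directory
	mount := exec.Command("out/minikube-linux-arm64", "mount", "-p", "functional-866562", hostDir+":/mount-9p")
	if err := mount.Start(); err != nil {
		log.Fatal(err)
	}
	defer mount.Process.Kill()

	// The 9p mount is not instantaneous, hence the retry seen in the log above.
	var err error
	for i := 0; i < 10; i++ {
		if err = ssh("findmnt -T /mount-9p | grep 9p"); err == nil {
			break
		}
		time.Sleep(700 * time.Millisecond)
	}
	if err != nil {
		log.Fatal("mount never appeared inside the node")
	}
	_ = ssh("ls -la /mount-9p") // files written on the host should be visible here
}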

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1335: Took "460.193687ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1349: Took "108.980912ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1386: Took "424.530743ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1399: Took "83.669534ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-866562 /tmp/TestFunctionalparallelMountCmdspecific-port430923870/001:/mount-9p --alsologtostderr -v=1 --port 46377]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-866562 /tmp/TestFunctionalparallelMountCmdspecific-port430923870/001:/mount-9p --alsologtostderr -v=1 --port 46377] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-866562 ssh "sudo umount -f /mount-9p": exit status 1 (359.255743ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-arm64 -p functional-866562 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-866562 /tmp/TestFunctionalparallelMountCmdspecific-port430923870/001:/mount-9p --alsologtostderr -v=1 --port 46377] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.50s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-866562 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2485519526/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-866562 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2485519526/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-866562 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2485519526/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-866562 ssh "findmnt -T" /mount1: exit status 1 (1.144177258s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0110 02:00:54.738122    4168 retry.go:84] will retry after 600ms: exit status 1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-arm64 -p functional-866562 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-866562 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-866562 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2485519526/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-866562 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2485519526/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-866562 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2485519526/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.84s)
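VerifyCleanup starts three overlapping mounts and then runs mount --kill=true, which terminates every background mount helper for the profile; that is why the stop handlers afterwards report the parent processes as already gone. A short Go sketch of that cleanup call, assuming the binary path and profile from the log:

// mount_cleanup.go: kill all background mount helpers for a profile.
package main

import (
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "mount", "-p", "functional-866562", "--kill=true").CombinedOutput()
	if err != nil {
		log.Fatalf("mount --kill failed: %v\n%s", err, out)
	}
	log.Println("all mount helpers for the profile terminated")
}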

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-866562
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-866562
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-866562
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (160.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0110 02:01:12.133011    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:01:22.374071    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:01:42.854914    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:02:23.815541    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:03:45.736669    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-103644 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m39.786453895s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (160.63s)
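StartCluster above brings up a multi-control-plane HA cluster with --ha --wait true on the docker driver with crio (three control planes in this run, per the later status output), then checks it with status. A minimal Go sketch of the same start-and-verify pair follows, with the flags copied from the log; streaming output through log.Writer is an illustrative choice, not what the harness does.

// ha_start.go: start and verify an HA minikube cluster.
package main

import (
	"log"
	"os/exec"
)

func main() {
	start := exec.Command("out/minikube-linux-arm64", "-p", "ha-103644", "start",
		"--ha", "--memory", "3072", "--wait", "true",
		"--driver=docker", "--container-runtime=crio")
	start.Stdout, start.Stderr = log.Writer(), log.Writer()
	if err := start.Run(); err != nil {
		log.Fatalf("HA start failed: %v", err) // took ~2m40s in the run above
	}
	// A quick status check confirms the control-plane nodes are up and the kubeconfig is wired.
	status := exec.Command("out/minikube-linux-arm64", "-p", "ha-103644", "status")
	status.Stdout, status.Stderr = log.Writer(), log.Writer()
	if err := status.Run(); err != nil {
		log.Fatalf("status failed: %v", err)
	}
}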

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-103644 kubectl -- rollout status deployment/busybox: (4.472476436s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 kubectl -- exec busybox-769dd8b7dd-4bm2x -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 kubectl -- exec busybox-769dd8b7dd-q74nq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 kubectl -- exec busybox-769dd8b7dd-wldsd -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 kubectl -- exec busybox-769dd8b7dd-4bm2x -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 kubectl -- exec busybox-769dd8b7dd-q74nq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 kubectl -- exec busybox-769dd8b7dd-wldsd -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 kubectl -- exec busybox-769dd8b7dd-4bm2x -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 kubectl -- exec busybox-769dd8b7dd-q74nq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 kubectl -- exec busybox-769dd8b7dd-wldsd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.15s)
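DeployApp here rolls out the busybox deployment and checks, from each of the three replicas, that an external name and the in-cluster kubernetes service names all resolve. A minimal Go sketch of those checks, assuming kubectl on PATH and the ha-103644 context; the pod names are copied from this run and would differ in another.

// dns_check.go: run nslookup from every busybox replica.
package main

import (
	"log"
	"os/exec"
)

func main() {
	pods := []string{"busybox-769dd8b7dd-4bm2x", "busybox-769dd8b7dd-q74nq", "busybox-769dd8b7dd-wldsd"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command("kubectl", "--context", "ha-103644",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				log.Fatalf("%s failed to resolve %s: %v\n%s", pod, name, err, out)
			}
		}
	}
	log.Println("DNS resolution works from all replicas")
}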

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 kubectl -- exec busybox-769dd8b7dd-4bm2x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 kubectl -- exec busybox-769dd8b7dd-4bm2x -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 kubectl -- exec busybox-769dd8b7dd-q74nq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 kubectl -- exec busybox-769dd8b7dd-q74nq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 kubectl -- exec busybox-769dd8b7dd-wldsd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 kubectl -- exec busybox-769dd8b7dd-wldsd -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.36s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (29.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-103644 node add --alsologtostderr -v 5: (28.856548766s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-103644 status --alsologtostderr -v 5: (1.074398729s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (29.93s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-103644 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.019719528s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-103644 status --output json --alsologtostderr -v 5: (1.01400711s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 cp testdata/cp-test.txt ha-103644:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 cp ha-103644:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile929191865/001/cp-test_ha-103644.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 cp ha-103644:/home/docker/cp-test.txt ha-103644-m02:/home/docker/cp-test_ha-103644_ha-103644-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644-m02 "sudo cat /home/docker/cp-test_ha-103644_ha-103644-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 cp ha-103644:/home/docker/cp-test.txt ha-103644-m03:/home/docker/cp-test_ha-103644_ha-103644-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644-m03 "sudo cat /home/docker/cp-test_ha-103644_ha-103644-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 cp ha-103644:/home/docker/cp-test.txt ha-103644-m04:/home/docker/cp-test_ha-103644_ha-103644-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644-m04 "sudo cat /home/docker/cp-test_ha-103644_ha-103644-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 cp testdata/cp-test.txt ha-103644-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 cp ha-103644-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile929191865/001/cp-test_ha-103644-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 cp ha-103644-m02:/home/docker/cp-test.txt ha-103644:/home/docker/cp-test_ha-103644-m02_ha-103644.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644 "sudo cat /home/docker/cp-test_ha-103644-m02_ha-103644.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 cp ha-103644-m02:/home/docker/cp-test.txt ha-103644-m03:/home/docker/cp-test_ha-103644-m02_ha-103644-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644-m03 "sudo cat /home/docker/cp-test_ha-103644-m02_ha-103644-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 cp ha-103644-m02:/home/docker/cp-test.txt ha-103644-m04:/home/docker/cp-test_ha-103644-m02_ha-103644-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644-m04 "sudo cat /home/docker/cp-test_ha-103644-m02_ha-103644-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 cp testdata/cp-test.txt ha-103644-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 cp ha-103644-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile929191865/001/cp-test_ha-103644-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 cp ha-103644-m03:/home/docker/cp-test.txt ha-103644:/home/docker/cp-test_ha-103644-m03_ha-103644.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644 "sudo cat /home/docker/cp-test_ha-103644-m03_ha-103644.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 cp ha-103644-m03:/home/docker/cp-test.txt ha-103644-m02:/home/docker/cp-test_ha-103644-m03_ha-103644-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644-m02 "sudo cat /home/docker/cp-test_ha-103644-m03_ha-103644-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 cp ha-103644-m03:/home/docker/cp-test.txt ha-103644-m04:/home/docker/cp-test_ha-103644-m03_ha-103644-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644-m04 "sudo cat /home/docker/cp-test_ha-103644-m03_ha-103644-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 cp testdata/cp-test.txt ha-103644-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 cp ha-103644-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile929191865/001/cp-test_ha-103644-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 cp ha-103644-m04:/home/docker/cp-test.txt ha-103644:/home/docker/cp-test_ha-103644-m04_ha-103644.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644 "sudo cat /home/docker/cp-test_ha-103644-m04_ha-103644.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 cp ha-103644-m04:/home/docker/cp-test.txt ha-103644-m02:/home/docker/cp-test_ha-103644-m04_ha-103644-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644-m02 "sudo cat /home/docker/cp-test_ha-103644-m04_ha-103644-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 cp ha-103644-m04:/home/docker/cp-test.txt ha-103644-m03:/home/docker/cp-test_ha-103644-m04_ha-103644-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 ssh -n ha-103644-m03 "sudo cat /home/docker/cp-test_ha-103644-m04_ha-103644-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.80s)
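CopyFile exercises minikube cp in every direction: host to node, node back to the host temp dir, and node to node, verifying each copy with minikube ssh -n and sudo cat. A minimal Go sketch of one host-to-node and one node-to-node hop under the same assumptions (binary path and node names from the log); the mk helper is illustrative.

// copy_file.go: copy a file onto a node, then between nodes, and verify it.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func mk(args ...string) string {
	out, err := exec.Command("out/minikube-linux-arm64", append([]string{"-p", "ha-103644"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Host -> primary node, then primary -> second control plane (paths mirror the log).
	mk("cp", "testdata/cp-test.txt", "ha-103644:/home/docker/cp-test.txt")
	mk("cp", "ha-103644:/home/docker/cp-test.txt", "ha-103644-m02:/home/docker/cp-test_ha-103644_ha-103644-m02.txt")

	got := mk("ssh", "-n", "ha-103644-m02", "sudo cat /home/docker/cp-test_ha-103644_ha-103644-m02.txt")
	if strings.TrimSpace(got) == "" {
		log.Fatal("copied file is empty on ha-103644-m02")
	}
	log.Println("cp-test.txt verified on ha-103644-m02")
}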

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-103644 node stop m02 --alsologtostderr -v 5: (12.053152155s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-103644 status --alsologtostderr -v 5: exit status 7 (859.333888ms)

                                                
                                                
-- stdout --
	ha-103644
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-103644-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-103644-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-103644-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:05:01.083083   45830 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:05:01.083205   45830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:05:01.083219   45830 out.go:374] Setting ErrFile to fd 2...
	I0110 02:05:01.083224   45830 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:05:01.083466   45830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:05:01.083654   45830 out.go:368] Setting JSON to false
	I0110 02:05:01.083690   45830 mustload.go:66] Loading cluster: ha-103644
	I0110 02:05:01.083760   45830 notify.go:221] Checking for updates...
	I0110 02:05:01.084710   45830 config.go:182] Loaded profile config "ha-103644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:05:01.084737   45830 status.go:174] checking status of ha-103644 ...
	I0110 02:05:01.085489   45830 cli_runner.go:164] Run: docker container inspect ha-103644 --format={{.State.Status}}
	I0110 02:05:01.105787   45830 status.go:371] ha-103644 host status = "Running" (err=<nil>)
	I0110 02:05:01.105828   45830 host.go:66] Checking if "ha-103644" exists ...
	I0110 02:05:01.106251   45830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-103644
	I0110 02:05:01.135055   45830 host.go:66] Checking if "ha-103644" exists ...
	I0110 02:05:01.135377   45830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:05:01.135431   45830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-103644
	I0110 02:05:01.163951   45830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/ha-103644/id_rsa Username:docker}
	I0110 02:05:01.282222   45830 ssh_runner.go:195] Run: systemctl --version
	I0110 02:05:01.291730   45830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:05:01.308077   45830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:05:01.383736   45830 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2026-01-10 02:05:01.37130471 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:05:01.384487   45830 kubeconfig.go:125] found "ha-103644" server: "https://192.168.49.254:8443"
	I0110 02:05:01.384538   45830 api_server.go:166] Checking apiserver status ...
	I0110 02:05:01.384606   45830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:05:01.399430   45830 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1235/cgroup
	I0110 02:05:01.409106   45830 api_server.go:192] apiserver freezer: "9:freezer:/docker/12e262e663dabebb387ab5c2fe1b9805be9cbec5132313052f0c3e0b33b4ec93/crio/crio-38c3e5b634d688f5f0eba30461276597db0bdf285609fee8e9f4b3d91bdca308"
	I0110 02:05:01.409226   45830 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/12e262e663dabebb387ab5c2fe1b9805be9cbec5132313052f0c3e0b33b4ec93/crio/crio-38c3e5b634d688f5f0eba30461276597db0bdf285609fee8e9f4b3d91bdca308/freezer.state
	I0110 02:05:01.419271   45830 api_server.go:214] freezer state: "THAWED"
	I0110 02:05:01.419364   45830 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0110 02:05:01.430964   45830 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0110 02:05:01.430998   45830 status.go:463] ha-103644 apiserver status = Running (err=<nil>)
	I0110 02:05:01.431010   45830 status.go:176] ha-103644 status: &{Name:ha-103644 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 02:05:01.431065   45830 status.go:174] checking status of ha-103644-m02 ...
	I0110 02:05:01.431421   45830 cli_runner.go:164] Run: docker container inspect ha-103644-m02 --format={{.State.Status}}
	I0110 02:05:01.456236   45830 status.go:371] ha-103644-m02 host status = "Stopped" (err=<nil>)
	I0110 02:05:01.456271   45830 status.go:384] host is not running, skipping remaining checks
	I0110 02:05:01.456279   45830 status.go:176] ha-103644-m02 status: &{Name:ha-103644-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 02:05:01.456298   45830 status.go:174] checking status of ha-103644-m03 ...
	I0110 02:05:01.456666   45830 cli_runner.go:164] Run: docker container inspect ha-103644-m03 --format={{.State.Status}}
	I0110 02:05:01.478333   45830 status.go:371] ha-103644-m03 host status = "Running" (err=<nil>)
	I0110 02:05:01.478436   45830 host.go:66] Checking if "ha-103644-m03" exists ...
	I0110 02:05:01.478793   45830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-103644-m03
	I0110 02:05:01.504448   45830 host.go:66] Checking if "ha-103644-m03" exists ...
	I0110 02:05:01.504998   45830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:05:01.505096   45830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-103644-m03
	I0110 02:05:01.525605   45830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/ha-103644-m03/id_rsa Username:docker}
	I0110 02:05:01.633405   45830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:05:01.651337   45830 kubeconfig.go:125] found "ha-103644" server: "https://192.168.49.254:8443"
	I0110 02:05:01.651453   45830 api_server.go:166] Checking apiserver status ...
	I0110 02:05:01.651545   45830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:05:01.669472   45830 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1188/cgroup
	I0110 02:05:01.690935   45830 api_server.go:192] apiserver freezer: "9:freezer:/docker/ef4f8b4a3d7ce688fa9cc0fdee83362e408bdcdfbc44b5d53ac020dd674b8765/crio/crio-8bada4f83f2b0ba293232f5278844e84b4c05e87a44089b32e3ed479c643f1a8"
	I0110 02:05:01.691035   45830 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ef4f8b4a3d7ce688fa9cc0fdee83362e408bdcdfbc44b5d53ac020dd674b8765/crio/crio-8bada4f83f2b0ba293232f5278844e84b4c05e87a44089b32e3ed479c643f1a8/freezer.state
	I0110 02:05:01.699664   45830 api_server.go:214] freezer state: "THAWED"
	I0110 02:05:01.699696   45830 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0110 02:05:01.708083   45830 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0110 02:05:01.708112   45830 status.go:463] ha-103644-m03 apiserver status = Running (err=<nil>)
	I0110 02:05:01.708122   45830 status.go:176] ha-103644-m03 status: &{Name:ha-103644-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 02:05:01.708140   45830 status.go:174] checking status of ha-103644-m04 ...
	I0110 02:05:01.708460   45830 cli_runner.go:164] Run: docker container inspect ha-103644-m04 --format={{.State.Status}}
	I0110 02:05:01.728607   45830 status.go:371] ha-103644-m04 host status = "Running" (err=<nil>)
	I0110 02:05:01.728631   45830 host.go:66] Checking if "ha-103644-m04" exists ...
	I0110 02:05:01.729926   45830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-103644-m04
	I0110 02:05:01.750130   45830 host.go:66] Checking if "ha-103644-m04" exists ...
	I0110 02:05:01.750466   45830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:05:01.750516   45830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-103644-m04
	I0110 02:05:01.769927   45830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/ha-103644-m04/id_rsa Username:docker}
	I0110 02:05:01.873432   45830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:05:01.888495   45830 status.go:176] ha-103644-m04 status: &{Name:ha-103644-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.91s)
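Note: the status probe recorded in the stderr block above follows a fixed sequence on each control-plane node: locate the kube-apiserver process, read its freezer cgroup to confirm the container is not paused, then hit the load-balanced /healthz endpoint. A rough manual sketch of that sequence, run inside the node container (e.g. via minikube ssh), using the paths and endpoint from this particular run for illustration only (curl stands in for the in-process health check):
# locate the apiserver process inside the node container
pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
# read its freezer cgroup; "THAWED" means the node is not paused
cg=$(sudo grep -E '^[0-9]+:freezer:' "/proc/${pid}/cgroup" | cut -d: -f3-)
sudo cat "/sys/fs/cgroup/freezer${cg}/freezer.state"
# probe the HA virtual endpoint; a healthy apiserver answers HTTP 200 "ok"
curl -sk https://192.168.49.254:8443/healthz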

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.87s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.87s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (22.39s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 node start m02 --alsologtostderr -v 5
E0110 02:05:18.347419    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:05:18.352848    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:05:18.363094    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:05:18.383366    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:05:18.423661    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:05:18.503997    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:05:18.665024    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:05:18.985294    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:05:19.626321    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:05:20.907201    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:05:23.468007    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-103644 node start m02 --alsologtostderr -v 5: (20.966684278s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-103644 status --alsologtostderr -v 5: (1.284307595s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (22.39s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.21s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.211228786s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.21s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (110.09s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 stop --alsologtostderr -v 5
E0110 02:05:28.588501    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:05:38.829031    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-103644 stop --alsologtostderr -v 5: (27.567037933s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 start --wait true --alsologtostderr -v 5
E0110 02:05:59.309299    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:06:01.892327    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:06:29.577502    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:06:40.269508    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-103644 start --wait true --alsologtostderr -v 5: (1m22.355215291s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (110.09s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.98s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-103644 node delete m03 --alsologtostderr -v 5: (10.027401279s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.98s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.79s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 stop --alsologtostderr -v 5
E0110 02:08:02.190506    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-103644 stop --alsologtostderr -v 5: (35.894476074s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-103644 status --alsologtostderr -v 5: exit status 7 (108.131716ms)

                                                
                                                
-- stdout --
	ha-103644
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-103644-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-103644-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:08:04.150926   57770 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:08:04.151105   57770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:08:04.151135   57770 out.go:374] Setting ErrFile to fd 2...
	I0110 02:08:04.151155   57770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:08:04.151564   57770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:08:04.151883   57770 out.go:368] Setting JSON to false
	I0110 02:08:04.151934   57770 mustload.go:66] Loading cluster: ha-103644
	I0110 02:08:04.153173   57770 config.go:182] Loaded profile config "ha-103644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:08:04.153233   57770 status.go:174] checking status of ha-103644 ...
	I0110 02:08:04.153358   57770 notify.go:221] Checking for updates...
	I0110 02:08:04.153819   57770 cli_runner.go:164] Run: docker container inspect ha-103644 --format={{.State.Status}}
	I0110 02:08:04.173827   57770 status.go:371] ha-103644 host status = "Stopped" (err=<nil>)
	I0110 02:08:04.173848   57770 status.go:384] host is not running, skipping remaining checks
	I0110 02:08:04.173854   57770 status.go:176] ha-103644 status: &{Name:ha-103644 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 02:08:04.173882   57770 status.go:174] checking status of ha-103644-m02 ...
	I0110 02:08:04.174197   57770 cli_runner.go:164] Run: docker container inspect ha-103644-m02 --format={{.State.Status}}
	I0110 02:08:04.197152   57770 status.go:371] ha-103644-m02 host status = "Stopped" (err=<nil>)
	I0110 02:08:04.197172   57770 status.go:384] host is not running, skipping remaining checks
	I0110 02:08:04.197178   57770 status.go:176] ha-103644-m02 status: &{Name:ha-103644-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 02:08:04.197196   57770 status.go:174] checking status of ha-103644-m04 ...
	I0110 02:08:04.197491   57770 cli_runner.go:164] Run: docker container inspect ha-103644-m04 --format={{.State.Status}}
	I0110 02:08:04.214525   57770 status.go:371] ha-103644-m04 host status = "Stopped" (err=<nil>)
	I0110 02:08:04.214545   57770 status.go:384] host is not running, skipping remaining checks
	I0110 02:08:04.214551   57770 status.go:176] ha-103644-m04 status: &{Name:ha-103644-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.00s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (78.55s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-103644 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m17.609041547s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (78.55s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.77s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (52.5s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-103644 node add --control-plane --alsologtostderr -v 5: (51.423120736s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-103644 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-103644 status --alsologtostderr -v 5: (1.080472088s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (52.50s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.034627161s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

                                                
                                    
TestJSONOutput/start/Command (45.76s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-428708 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E0110 02:10:46.031586    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:11:01.892565    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-428708 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (45.754005199s)
--- PASS: TestJSONOutput/start/Command (45.76s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.85s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-428708 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-428708 --output=json --user=testUser: (5.847451484s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-895734 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-895734 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (85.370527ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c285bd77-ee75-4578-9ac2-53241a77c813","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-895734] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"093be79e-c2ef-4ae7-88aa-122c93a754d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22414"}}
	{"specversion":"1.0","id":"43afb081-4de2-463b-b312-d813a587a97d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7a240c01-5b9d-4572-b7fa-d4cb99c2f678","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig"}}
	{"specversion":"1.0","id":"cbbadac0-26b3-48a6-9c60-ddc2d87b2bcd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube"}}
	{"specversion":"1.0","id":"6bb46003-8a3f-48c4-ba6f-a5edddb4be23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"edc69e00-60e9-4f98-ba89-23741ef85c16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4516837f-758c-4991-abd0-e9cfc760e73a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-895734" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-895734
--- PASS: TestErrorJSONOutput (0.22s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (34.47s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-905464 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-905464 --network=: (31.951000929s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-905464" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-905464
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-905464: (2.486583057s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.47s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (30.37s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-640997 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-640997 --network=bridge: (28.260176549s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-640997" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-640997
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-640997: (2.080581151s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (30.37s)

                                                
                                    
TestKicExistingNetwork (29.87s)
=== RUN   TestKicExistingNetwork
I0110 02:12:31.250448    4168 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0110 02:12:31.283298    4168 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0110 02:12:31.284233    4168 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0110 02:12:31.284268    4168 cli_runner.go:164] Run: docker network inspect existing-network
W0110 02:12:31.309334    4168 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0110 02:12:31.309366    4168 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0110 02:12:31.309379    4168 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0110 02:12:31.309474    4168 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0110 02:12:31.338423    4168 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-dba69832168e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:92:28:cf:b8:5c:6c} reservation:<nil>}
I0110 02:12:31.338735    4168 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bc3b40}
I0110 02:12:31.338757    4168 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0110 02:12:31.338805    4168 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0110 02:12:31.421404    4168 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-426644 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-426644 --network=existing-network: (27.564112961s)
helpers_test.go:176: Cleaning up "existing-network-426644" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-426644
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-426644: (2.102687846s)
I0110 02:13:01.106763    4168 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (29.87s)
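For reference, the flow this test exercises (pre-creating a Docker network, then pointing minikube at it with --network) can be reproduced by hand roughly as below. The subnet and labels are the ones network_create.go happened to pick in this run, and the profile name is made up for illustration:
# create the network up front, mirroring the docker network create call logged above
docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
  --label=created_by.minikube.sigs.k8s.io=true existing-network
# start a cluster attached to the pre-existing network
out/minikube-linux-arm64 start -p existing-network-demo --network=existing-network --driver=docker --container-runtime=crio
# clean up the profile and the network
out/minikube-linux-arm64 delete -p existing-network-demo
docker network rm existing-network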

                                                
                                    
TestKicCustomSubnet (30.15s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-144320 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-144320 --subnet=192.168.60.0/24: (28.038889923s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-144320 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-144320" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-144320
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-144320: (2.09566482s)
--- PASS: TestKicCustomSubnet (30.15s)

                                                
                                    
TestKicStaticIP (30.19s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-970161 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-970161 --static-ip=192.168.200.200: (27.929957752s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-970161 ip
helpers_test.go:176: Cleaning up "static-ip-970161" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-970161
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-970161: (2.111862076s)
--- PASS: TestKicStaticIP (30.19s)

                                                
                                    
TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (59.98s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-235599 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-235599 --driver=docker  --container-runtime=crio: (25.79218713s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-237929 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-237929 --driver=docker  --container-runtime=crio: (28.33624791s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-235599
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-237929
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-237929" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-237929
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-237929: (2.027244234s)
helpers_test.go:176: Cleaning up "first-235599" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-235599
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-235599: (2.351889809s)
--- PASS: TestMinikubeProfile (59.98s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.74s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-685921 --memory=3072 --mount-string /tmp/TestMountStartserial3920643771/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-685921 --memory=3072 --mount-string /tmp/TestMountStartserial3920643771/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.735964804s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.74s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-685921 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.66s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-687718 --memory=3072 --mount-string /tmp/TestMountStartserial3920643771/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E0110 02:15:18.352865    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-687718 --memory=3072 --mount-string /tmp/TestMountStartserial3920643771/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.660739185s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.66s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.28s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-687718 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.7s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-685921 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-685921 --alsologtostderr -v=5: (1.694988945s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-687718 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-687718
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-687718: (1.28349443s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.98s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-687718
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-687718: (6.978836545s)
--- PASS: TestMountStart/serial/RestartStopped (7.98s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-687718 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (73.38s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-940034 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0110 02:16:01.891931    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-940034 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m12.842089733s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (73.38s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.11s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-940034 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-940034 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-940034 -- rollout status deployment/busybox: (3.347560584s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-940034 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-940034 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-940034 -- exec busybox-769dd8b7dd-4zdzm -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-940034 -- exec busybox-769dd8b7dd-j8fbg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-940034 -- exec busybox-769dd8b7dd-4zdzm -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-940034 -- exec busybox-769dd8b7dd-j8fbg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-940034 -- exec busybox-769dd8b7dd-4zdzm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-940034 -- exec busybox-769dd8b7dd-j8fbg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.11s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.86s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-940034 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-940034 -- exec busybox-769dd8b7dd-4zdzm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-940034 -- exec busybox-769dd8b7dd-4zdzm -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-940034 -- exec busybox-769dd8b7dd-j8fbg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-940034 -- exec busybox-769dd8b7dd-j8fbg -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.86s)

                                                
                                    
TestMultiNode/serial/AddNode (29.17s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-940034 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-940034 -v=5 --alsologtostderr: (28.452734294s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.17s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.08s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-940034 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.69s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.14s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 cp testdata/cp-test.txt multinode-940034:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 ssh -n multinode-940034 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 cp multinode-940034:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2448367549/001/cp-test_multinode-940034.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 ssh -n multinode-940034 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 cp multinode-940034:/home/docker/cp-test.txt multinode-940034-m02:/home/docker/cp-test_multinode-940034_multinode-940034-m02.txt
E0110 02:17:24.937887    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 ssh -n multinode-940034 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 ssh -n multinode-940034-m02 "sudo cat /home/docker/cp-test_multinode-940034_multinode-940034-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 cp multinode-940034:/home/docker/cp-test.txt multinode-940034-m03:/home/docker/cp-test_multinode-940034_multinode-940034-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 ssh -n multinode-940034 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 ssh -n multinode-940034-m03 "sudo cat /home/docker/cp-test_multinode-940034_multinode-940034-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 cp testdata/cp-test.txt multinode-940034-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 ssh -n multinode-940034-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 cp multinode-940034-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2448367549/001/cp-test_multinode-940034-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 ssh -n multinode-940034-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 cp multinode-940034-m02:/home/docker/cp-test.txt multinode-940034:/home/docker/cp-test_multinode-940034-m02_multinode-940034.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 ssh -n multinode-940034-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 ssh -n multinode-940034 "sudo cat /home/docker/cp-test_multinode-940034-m02_multinode-940034.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 cp multinode-940034-m02:/home/docker/cp-test.txt multinode-940034-m03:/home/docker/cp-test_multinode-940034-m02_multinode-940034-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 ssh -n multinode-940034-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 ssh -n multinode-940034-m03 "sudo cat /home/docker/cp-test_multinode-940034-m02_multinode-940034-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 cp testdata/cp-test.txt multinode-940034-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 ssh -n multinode-940034-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 cp multinode-940034-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2448367549/001/cp-test_multinode-940034-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 ssh -n multinode-940034-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 cp multinode-940034-m03:/home/docker/cp-test.txt multinode-940034:/home/docker/cp-test_multinode-940034-m03_multinode-940034.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 ssh -n multinode-940034-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 ssh -n multinode-940034 "sudo cat /home/docker/cp-test_multinode-940034-m03_multinode-940034.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 cp multinode-940034-m03:/home/docker/cp-test.txt multinode-940034-m02:/home/docker/cp-test_multinode-940034-m03_multinode-940034-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 ssh -n multinode-940034-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 ssh -n multinode-940034-m02 "sudo cat /home/docker/cp-test_multinode-940034-m03_multinode-940034-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.14s)
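
The CopyFile steps above pair every `minikube cp` with an `ssh -n ... sudo cat` read-back on the destination node. A minimal Go sketch of that same pattern follows; it is not the test's own helper, and the binary path, profile, and node names are simply copied from this run.

// Sketch only: mirrors the cp-then-verify pattern from the log above.
// The binary path and profile/node names come from this run, not a
// general minikube install layout.
package main

import (
	"fmt"
	"os/exec"
)

func copyAndVerify(profile, node, src, dst string) error {
	// `minikube cp` places the file on the node ...
	if out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"cp", src, node+":"+dst).CombinedOutput(); err != nil {
		return fmt.Errorf("cp failed: %v: %s", err, out)
	}
	// ... and `minikube ssh -n <node>` reads it back, as helpers_test.go does above.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "-n", node, "sudo cat "+dst).CombinedOutput()
	if err != nil {
		return fmt.Errorf("read-back failed: %v: %s", err, out)
	}
	fmt.Printf("%s now holds: %s", node, out)
	return nil
}

func main() {
	_ = copyAndVerify("multinode-940034", "multinode-940034-m02",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt")
}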

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-940034 node stop m03: (1.314284198s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-940034 status: exit status 7 (534.166703ms)

                                                
                                                
-- stdout --
	multinode-940034
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-940034-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-940034-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-940034 status --alsologtostderr: exit status 7 (530.492172ms)

                                                
                                                
-- stdout --
	multinode-940034
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-940034-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-940034-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:17:35.161742  108302 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:17:35.161925  108302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:17:35.161950  108302 out.go:374] Setting ErrFile to fd 2...
	I0110 02:17:35.162003  108302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:17:35.162314  108302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:17:35.162554  108302 out.go:368] Setting JSON to false
	I0110 02:17:35.162617  108302 mustload.go:66] Loading cluster: multinode-940034
	I0110 02:17:35.162661  108302 notify.go:221] Checking for updates...
	I0110 02:17:35.163107  108302 config.go:182] Loaded profile config "multinode-940034": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:17:35.163444  108302 status.go:174] checking status of multinode-940034 ...
	I0110 02:17:35.164120  108302 cli_runner.go:164] Run: docker container inspect multinode-940034 --format={{.State.Status}}
	I0110 02:17:35.181724  108302 status.go:371] multinode-940034 host status = "Running" (err=<nil>)
	I0110 02:17:35.181746  108302 host.go:66] Checking if "multinode-940034" exists ...
	I0110 02:17:35.182049  108302 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-940034
	I0110 02:17:35.201438  108302 host.go:66] Checking if "multinode-940034" exists ...
	I0110 02:17:35.201747  108302 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:17:35.201797  108302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-940034
	I0110 02:17:35.228610  108302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/multinode-940034/id_rsa Username:docker}
	I0110 02:17:35.334455  108302 ssh_runner.go:195] Run: systemctl --version
	I0110 02:17:35.341518  108302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:17:35.355039  108302 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:17:35.417036  108302 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2026-01-10 02:17:35.407117033 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:17:35.417556  108302 kubeconfig.go:125] found "multinode-940034" server: "https://192.168.67.2:8443"
	I0110 02:17:35.417596  108302 api_server.go:166] Checking apiserver status ...
	I0110 02:17:35.417656  108302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 02:17:35.429011  108302 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1228/cgroup
	I0110 02:17:35.439014  108302 api_server.go:192] apiserver freezer: "9:freezer:/docker/68b4780a5815bdff2990f8976f9bfe3a0d5d35af6259ee9fac138ab0276548f3/crio/crio-ef100ca4f6057ecdb8a3a6aa9e3fe3a248e95b9762e12a9364ad8c005bda84e3"
	I0110 02:17:35.439090  108302 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/68b4780a5815bdff2990f8976f9bfe3a0d5d35af6259ee9fac138ab0276548f3/crio/crio-ef100ca4f6057ecdb8a3a6aa9e3fe3a248e95b9762e12a9364ad8c005bda84e3/freezer.state
	I0110 02:17:35.446497  108302 api_server.go:214] freezer state: "THAWED"
	I0110 02:17:35.446526  108302 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0110 02:17:35.454805  108302 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0110 02:17:35.454845  108302 status.go:463] multinode-940034 apiserver status = Running (err=<nil>)
	I0110 02:17:35.454859  108302 status.go:176] multinode-940034 status: &{Name:multinode-940034 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 02:17:35.454875  108302 status.go:174] checking status of multinode-940034-m02 ...
	I0110 02:17:35.455188  108302 cli_runner.go:164] Run: docker container inspect multinode-940034-m02 --format={{.State.Status}}
	I0110 02:17:35.472363  108302 status.go:371] multinode-940034-m02 host status = "Running" (err=<nil>)
	I0110 02:17:35.472387  108302 host.go:66] Checking if "multinode-940034-m02" exists ...
	I0110 02:17:35.472689  108302 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-940034-m02
	I0110 02:17:35.489564  108302 host.go:66] Checking if "multinode-940034-m02" exists ...
	I0110 02:17:35.489900  108302 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 02:17:35.489947  108302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-940034-m02
	I0110 02:17:35.507690  108302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22414-2353/.minikube/machines/multinode-940034-m02/id_rsa Username:docker}
	I0110 02:17:35.609561  108302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 02:17:35.622431  108302 status.go:176] multinode-940034-m02 status: &{Name:multinode-940034-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0110 02:17:35.622520  108302 status.go:174] checking status of multinode-940034-m03 ...
	I0110 02:17:35.622856  108302 cli_runner.go:164] Run: docker container inspect multinode-940034-m03 --format={{.State.Status}}
	I0110 02:17:35.641009  108302 status.go:371] multinode-940034-m03 host status = "Stopped" (err=<nil>)
	I0110 02:17:35.641033  108302 status.go:384] host is not running, skipping remaining checks
	I0110 02:17:35.641040  108302 status.go:176] multinode-940034-m03 status: &{Name:multinode-940034-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.38s)
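
In the StopNode output, `minikube status` still prints the full per-node table but exits non-zero (exit status 7 above) once any host is stopped. A small sketch, assuming only that observed behaviour, of reading the code from Go; the helper name is illustrative.

// Sketch only: captures the exit code of `minikube status`, which this run
// shows as 7 while node m03 is stopped. Treating that code as informational
// rather than fatal is an assumption drawn from this log.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func statusExitCode(profile string) (int, error) {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", profile, "status").CombinedOutput()
	fmt.Print(string(out)) // same per-node table as in the stdout block above
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), nil
	}
	return 0, err
}

func main() {
	code, err := statusExitCode("multinode-940034")
	if err != nil {
		panic(err)
	}
	fmt.Println("status exit code:", code) // 7 in this run while m03 is stopped
}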

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (8.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-940034 node start m03 -v=5 --alsologtostderr: (7.718188923s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.54s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (81.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-940034
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-940034
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-940034: (25.098475257s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-940034 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-940034 --wait=true -v=5 --alsologtostderr: (55.864921504s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-940034
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.08s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-940034 node delete m03: (4.691088023s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.43s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-940034 stop: (23.801345681s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-940034 status: exit status 7 (92.299943ms)

                                                
                                                
-- stdout --
	multinode-940034
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-940034-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-940034 status --alsologtostderr: exit status 7 (98.991463ms)

                                                
                                                
-- stdout --
	multinode-940034
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-940034-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:19:34.631456  116208 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:19:34.631593  116208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:19:34.631604  116208 out.go:374] Setting ErrFile to fd 2...
	I0110 02:19:34.631609  116208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:19:34.631993  116208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:19:34.632226  116208 out.go:368] Setting JSON to false
	I0110 02:19:34.632250  116208 mustload.go:66] Loading cluster: multinode-940034
	I0110 02:19:34.632897  116208 config.go:182] Loaded profile config "multinode-940034": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:19:34.632917  116208 status.go:174] checking status of multinode-940034 ...
	I0110 02:19:34.633626  116208 cli_runner.go:164] Run: docker container inspect multinode-940034 --format={{.State.Status}}
	I0110 02:19:34.634997  116208 notify.go:221] Checking for updates...
	I0110 02:19:34.654088  116208 status.go:371] multinode-940034 host status = "Stopped" (err=<nil>)
	I0110 02:19:34.654108  116208 status.go:384] host is not running, skipping remaining checks
	I0110 02:19:34.654114  116208 status.go:176] multinode-940034 status: &{Name:multinode-940034 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 02:19:34.654145  116208 status.go:174] checking status of multinode-940034-m02 ...
	I0110 02:19:34.654451  116208 cli_runner.go:164] Run: docker container inspect multinode-940034-m02 --format={{.State.Status}}
	I0110 02:19:34.685309  116208 status.go:371] multinode-940034-m02 host status = "Stopped" (err=<nil>)
	I0110 02:19:34.685333  116208 status.go:384] host is not running, skipping remaining checks
	I0110 02:19:34.685341  116208 status.go:176] multinode-940034-m02 status: &{Name:multinode-940034-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.99s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (49.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-940034 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0110 02:20:18.346978    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-940034 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (48.340739264s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-940034 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.01s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (31.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-940034
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-940034-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-940034-m02 --driver=docker  --container-runtime=crio: exit status 14 (106.525994ms)

                                                
                                                
-- stdout --
	* [multinode-940034-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22414
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-940034-m02' is duplicated with machine name 'multinode-940034-m02' in profile 'multinode-940034'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-940034-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-940034-m03 --driver=docker  --container-runtime=crio: (29.046892983s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-940034
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-940034: exit status 80 (348.299276ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-940034 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-940034-m03 already exists in multinode-940034-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-940034-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-940034-m03: (2.122497984s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (31.67s)
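
ValidateNameConflict shows that a profile name which collides with an existing machine name makes `minikube start` exit immediately with status 14 (MK_USAGE). A small sketch, assuming only the exit code observed above, of detecting that conflict from a wrapper; the function name is illustrative.

// Sketch only: flags the MK_USAGE name conflict shown above by its exit
// status (14). The check relies solely on the code seen in this run.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func startProfile(name string) error {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", name,
		"--driver=docker", "--container-runtime=crio")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		return fmt.Errorf("profile name %q is already taken by an existing machine", name)
	}
	return err
}

func main() {
	if err := startProfile("multinode-940034-m02"); err != nil {
		fmt.Println(err)
	}
}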

                                                
                                    
x
+
TestScheduledStopUnix (104.24s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-325096 --memory=3072 --driver=docker  --container-runtime=crio
E0110 02:21:01.893331    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-325096 --memory=3072 --driver=docker  --container-runtime=crio: (27.98254251s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-325096 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0110 02:21:27.673320  124696 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:21:27.673428  124696 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:21:27.673438  124696 out.go:374] Setting ErrFile to fd 2...
	I0110 02:21:27.673444  124696 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:21:27.673710  124696 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:21:27.673965  124696 out.go:368] Setting JSON to false
	I0110 02:21:27.674073  124696 mustload.go:66] Loading cluster: scheduled-stop-325096
	I0110 02:21:27.674422  124696 config.go:182] Loaded profile config "scheduled-stop-325096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:21:27.674496  124696 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/scheduled-stop-325096/config.json ...
	I0110 02:21:27.674697  124696 mustload.go:66] Loading cluster: scheduled-stop-325096
	I0110 02:21:27.674821  124696 config.go:182] Loaded profile config "scheduled-stop-325096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-325096 -n scheduled-stop-325096
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-325096 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0110 02:21:28.174048  124788 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:21:28.174173  124788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:21:28.174184  124788 out.go:374] Setting ErrFile to fd 2...
	I0110 02:21:28.174189  124788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:21:28.174451  124788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:21:28.174715  124788 out.go:368] Setting JSON to false
	I0110 02:21:28.175715  124788 daemonize_unix.go:73] killing process 124713 as it is an old scheduled stop
	I0110 02:21:28.175902  124788 mustload.go:66] Loading cluster: scheduled-stop-325096
	I0110 02:21:28.176278  124788 config.go:182] Loaded profile config "scheduled-stop-325096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:21:28.176356  124788 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/scheduled-stop-325096/config.json ...
	I0110 02:21:28.176527  124788 mustload.go:66] Loading cluster: scheduled-stop-325096
	I0110 02:21:28.176633  124788 config.go:182] Loaded profile config "scheduled-stop-325096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I0110 02:21:28.186287    4168 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/scheduled-stop-325096/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-325096 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E0110 02:21:41.391852    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-325096 -n scheduled-stop-325096
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-325096
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-325096 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0110 02:21:54.061248  125275 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:21:54.061443  125275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:21:54.061470  125275 out.go:374] Setting ErrFile to fd 2...
	I0110 02:21:54.061489  125275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:21:54.061915  125275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:21:54.062281  125275 out.go:368] Setting JSON to false
	I0110 02:21:54.062436  125275 mustload.go:66] Loading cluster: scheduled-stop-325096
	I0110 02:21:54.063166  125275 config.go:182] Loaded profile config "scheduled-stop-325096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:21:54.063310  125275 profile.go:143] Saving config to /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/scheduled-stop-325096/config.json ...
	I0110 02:21:54.064195  125275 mustload.go:66] Loading cluster: scheduled-stop-325096
	I0110 02:21:54.064378  125275 config.go:182] Loaded profile config "scheduled-stop-325096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-325096
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-325096: exit status 7 (63.79652ms)

                                                
                                                
-- stdout --
	scheduled-stop-325096
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-325096 -n scheduled-stop-325096
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-325096 -n scheduled-stop-325096: exit status 7 (62.363699ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-325096" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-325096
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-325096: (4.680222375s)
--- PASS: TestScheduledStopUnix (104.24s)
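
The scheduled-stop flow above arms a stop with `--schedule`, inspects it via `--format={{.TimeToStop}}`, and cancels it with `--cancel-scheduled`. A compact sketch of the same sequence, assuming the binary path and profile name from this run.

// Sketch only: the schedule/inspect/cancel sequence from the log above.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) string {
	out, _ := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	return string(out)
}

func main() {
	p := "scheduled-stop-325096"
	run("stop", "-p", p, "--schedule", "5m")                      // arm a stop five minutes out
	fmt.Print(run("status", "--format={{.TimeToStop}}", "-p", p)) // time remaining until the stop
	fmt.Print(run("stop", "-p", p, "--cancel-scheduled"))         // "All existing scheduled stops cancelled"
}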

                                                
                                    
x
+
TestInsufficientStorage (13.12s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-447390 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-447390 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.584260511s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ada51d47-a40b-43cd-b1d3-d9893758e791","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-447390] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d579442e-a2ad-4c15-a156-8bde45404e31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22414"}}
	{"specversion":"1.0","id":"62cc7bd0-7c59-4ca3-929d-f29ccc23bb81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"64186f3f-f7ad-4004-8a7d-3037d2f29c22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig"}}
	{"specversion":"1.0","id":"eccd5997-c95e-4a37-a50d-efed13459186","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube"}}
	{"specversion":"1.0","id":"550c73b5-da8e-456e-b12a-530473ebb2a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b290fd06-ec3d-410c-8d4d-a7e537d36177","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3237cafd-a24e-457a-a9cb-583ab871bcbf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f874fb0b-f401-4c7b-b4cf-fe46179def2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"fb52d301-ba8e-4165-954f-e159aad71af0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0cfe0bc9-1313-45aa-aed3-029f09d67909","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"dffac991-60cb-4073-9f37-b25f39c5d6d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-447390\" primary control-plane node in \"insufficient-storage-447390\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9bafb18e-e638-4d7d-a07d-f8392ad975bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1767944074-22401 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7d694e9f-ec9e-4894-ad0f-05acb1ed50b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"da37fd75-c1f3-4d2d-a3f1-99c3e9a01456","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-447390 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-447390 --output=json --layout=cluster: exit status 7 (295.045575ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-447390","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-447390","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0110 02:22:54.740998  127140 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-447390" does not appear in /home/jenkins/minikube-integration/22414-2353/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-447390 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-447390 --output=json --layout=cluster: exit status 7 (316.250045ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-447390","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-447390","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0110 02:22:55.058443  127205 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-447390" does not appear in /home/jenkins/minikube-integration/22414-2353/kubeconfig
	E0110 02:22:55.067888  127205 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/insufficient-storage-447390/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-447390" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-447390
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-447390: (1.924070155s)
--- PASS: TestInsufficientStorage (13.12s)
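
With `--output=json`, `minikube start` emits one CloudEvents-style JSON object per line, and the storage failure above arrives as an `io.k8s.sigs.minikube.error` event carrying `exitcode` 26 and an `advice` string. A minimal sketch of decoding such a line, keeping only the fields visible in this log.

// Sketch only: decodes one of the JSON event lines printed above; the struct
// models just the fields that appear in this report.
package main

import (
	"encoding/json"
	"fmt"
)

type minikubeEvent struct {
	Type string `json:"type"` // e.g. io.k8s.sigs.minikube.error
	Data struct {
		Name     string `json:"name"`     // RSRC_DOCKER_STORAGE in the failure above
		Message  string `json:"message"`
		ExitCode string `json:"exitcode"` // "26" here
		Advice   string `json:"advice"`
	} `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}`
	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exit %s)\n", ev.Data.Name, ev.Data.Message, ev.Data.ExitCode)
}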

                                                
                                    
x
+
TestRunningBinaryUpgrade (311.59s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1439717403 start -p running-upgrade-970119 --memory=3072 --vm-driver=docker  --container-runtime=crio
E0110 02:31:01.892711    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1439717403 start -p running-upgrade-970119 --memory=3072 --vm-driver=docker  --container-runtime=crio: (40.591889317s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-970119 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-970119 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m27.398461285s)
helpers_test.go:176: Cleaning up "running-upgrade-970119" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-970119
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-970119: (1.990184325s)
--- PASS: TestRunningBinaryUpgrade (311.59s)

                                                
                                    
x
+
TestKubernetesUpgrade (349.78s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-587992 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-587992 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (37.712311976s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-587992 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-587992 --alsologtostderr: (1.473670011s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-587992 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-587992 status --format={{.Host}}: exit status 7 (112.28691ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-587992 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0110 02:25:18.347639    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-587992 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m44.654926809s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-587992 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-587992 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-587992 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (94.741176ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-587992] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22414
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-587992
	    minikube start -p kubernetes-upgrade-587992 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5879922 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-587992 --kubernetes-version=v1.35.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-587992 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-587992 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.02777442s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-587992" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-587992
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-587992: (2.604320515s)
--- PASS: TestKubernetesUpgrade (349.78s)
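
The downgrade attempt above is rejected up front with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) rather than partway through a start. A short sketch, assuming only that exit code, of how a wrapper could branch on it; the helper is illustrative, not part of minikube.

// Sketch only: branches on the exit status 106 returned by the downgrade
// attempt in this run.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func startWithVersion(profile, version string) (int, error) {
	err := exec.Command("out/minikube-linux-arm64", "start", "-p", profile,
		"--kubernetes-version="+version, "--driver=docker", "--container-runtime=crio").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), nil
	}
	return 0, err
}

func main() {
	if code, _ := startWithVersion("kubernetes-upgrade-587992", "v1.28.0"); code == 106 {
		fmt.Println("downgrade refused; recreate the profile or keep the current version")
	}
}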

                                                
                                    
x
+
TestMissingContainerUpgrade (115.76s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.2947209094 start -p missing-upgrade-219545 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.2947209094 start -p missing-upgrade-219545 --memory=3072 --driver=docker  --container-runtime=crio: (1m2.474450841s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-219545
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-219545
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-219545 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-219545 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (48.834449124s)
helpers_test.go:176: Cleaning up "missing-upgrade-219545" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-219545
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-219545: (2.359339s)
--- PASS: TestMissingContainerUpgrade (115.76s)

                                                
                                    
x
+
TestPause/serial/Start (56.94s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-576041 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-576041 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (56.938075179s)
--- PASS: TestPause/serial/Start (56.94s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (28.13s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-576041 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-576041 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.112088752s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.13s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.83s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (324.69s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.732113620 start -p stopped-upgrade-185629 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.732113620 start -p stopped-upgrade-185629 --memory=3072 --vm-driver=docker  --container-runtime=crio: (42.856914644s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.732113620 -p stopped-upgrade-185629 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.732113620 -p stopped-upgrade-185629 stop: (12.00506668s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-185629 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0110 02:26:01.892021    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-185629 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m29.827034247s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (324.69s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (2.23s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-185629
E0110 02:30:18.347783    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-185629: (2.229598095s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.23s)

                                                
                                    
x
+
TestPreload/Start-NoPreload-PullImage (71.21s)

                                                
                                                
=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-208630 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-208630 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (1m4.534288156s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-208630 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-208630
preload_test.go:62: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-208630: (5.917752708s)
--- PASS: TestPreload/Start-NoPreload-PullImage (71.21s)

                                                
                                    
x
+
TestPreload/Restart-With-Preload-Check-User-Image (52.64s)

                                                
                                                
=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-208630 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:71: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-208630 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (52.37807613s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-208630 image list
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (52.64s)
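
Note: the two preload subtests together check that an image pulled into a --preload=false cluster survives a stop and a restart with --preload=true. A rough manual equivalent of the final verification (the grep filter is illustrative; the test inspects the `image list` output programmatically):

	# after the restart above, the image pulled before the stop should still be listed
	out/minikube-linux-arm64 -p test-preload-208630 image list | grep busybox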

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-489583 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-489583 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (97.220327ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-489583] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22414
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
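
Note: exit status 14 (MK_USAGE) is the expected outcome here, since --no-kubernetes and --kubernetes-version are mutually exclusive. A sketch of the two resolutions the error message points at (profile name and driver flags reused from the test invocation):

	# either drop the explicit version when running without Kubernetes ...
	out/minikube-linux-arm64 start -p NoKubernetes-489583 --no-kubernetes --driver=docker --container-runtime=crio
	# ... or clear a globally configured kubernetes-version, as the error suggests
	out/minikube-linux-arm64 config unset kubernetes-version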

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (28.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-489583 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0110 02:36:01.892235    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-489583 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.876171602s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-489583 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (28.23s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-489583 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-489583 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (13.779008663s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-489583 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-489583 status -o json: exit status 2 (300.595348ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-489583","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-489583
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-489583: (1.978110651s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.06s)
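
Note: the JSON above is the expected shape after --no-kubernetes is applied to a profile that previously ran Kubernetes: the host container stays Running while Kubelet and APIServer report Stopped, which is why `status` exits non-zero. A small sketch for pulling individual fields out of that JSON, assuming jq is available on the host:

	out/minikube-linux-arm64 -p NoKubernetes-489583 status -o json | jq -r '.Host, .Kubelet, .APIServer'
	# Running
	# Stopped
	# Stopped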

                                                
                                    
TestNoKubernetes/serial/Start (8.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-489583 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-489583 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.209341318s)
--- PASS: TestNoKubernetes/serial/Start (8.21s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22414-2353/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
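
Note: this subtest only inspects the local cache: a --no-kubernetes profile should not have downloaded any Kubernetes binaries, so the version directory under the cache is expected to be absent or empty. A rough shell equivalent of that check (path taken from the log line above; the exact assertion in no_kubernetes_test.go may differ):

	ls -A /home/jenkins/minikube-integration/22414-2353/.minikube/cache/linux/arm64/v0.0.0 2>/dev/null | wc -l   # expect 0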

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-489583 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-489583 "sudo systemctl is-active --quiet service kubelet": exit status 1 (272.024576ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
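
Note: the "Process exited with status 3" in stderr is the point of this check: systemctl is-active exits 0 only when the unit is active, so a non-zero exit for kubelet is exactly what a --no-kubernetes profile should produce. The check is just the ssh one-liner from the log:

	out/minikube-linux-arm64 ssh -p NoKubernetes-489583 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet not running (expected)"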

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.03s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-489583
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-489583: (1.305036893s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-489583 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-489583 --driver=docker  --container-runtime=crio: (7.055129482s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.06s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-489583 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-489583 "sudo systemctl is-active --quiet service kubelet": exit status 1 (264.541544ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                    
TestNetworkPlugins/group/false (3.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-989144 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-989144 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (171.130971ms)

                                                
                                                
-- stdout --
	* [false-989144] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22414
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 02:36:42.860654  184480 out.go:360] Setting OutFile to fd 1 ...
	I0110 02:36:42.860768  184480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:36:42.860779  184480 out.go:374] Setting ErrFile to fd 2...
	I0110 02:36:42.860785  184480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 02:36:42.861040  184480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22414-2353/.minikube/bin
	I0110 02:36:42.861424  184480 out.go:368] Setting JSON to false
	I0110 02:36:42.862226  184480 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":4752,"bootTime":1768007851,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0110 02:36:42.862290  184480 start.go:143] virtualization:  
	I0110 02:36:42.867657  184480 out.go:179] * [false-989144] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0110 02:36:42.870601  184480 out.go:179]   - MINIKUBE_LOCATION=22414
	I0110 02:36:42.870658  184480 notify.go:221] Checking for updates...
	I0110 02:36:42.876204  184480 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 02:36:42.879075  184480 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22414-2353/kubeconfig
	I0110 02:36:42.881908  184480 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22414-2353/.minikube
	I0110 02:36:42.884718  184480 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0110 02:36:42.887654  184480 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 02:36:42.890879  184480 config.go:182] Loaded profile config "force-systemd-env-088457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 02:36:42.890979  184480 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 02:36:42.914647  184480 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I0110 02:36:42.914768  184480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 02:36:42.971416  184480 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2026-01-10 02:36:42.962375551 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0110 02:36:42.971520  184480 docker.go:319] overlay module found
	I0110 02:36:42.974555  184480 out.go:179] * Using the docker driver based on user configuration
	I0110 02:36:42.977382  184480 start.go:309] selected driver: docker
	I0110 02:36:42.977402  184480 start.go:928] validating driver "docker" against <nil>
	I0110 02:36:42.977428  184480 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 02:36:42.980912  184480 out.go:203] 
	W0110 02:36:42.983692  184480 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0110 02:36:42.986497  184480 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-989144 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-989144

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-989144

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-989144

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-989144

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-989144

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-989144

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-989144

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-989144

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-989144

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-989144

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-989144

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-989144" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-989144" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-989144

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-989144"

                                                
                                                
----------------------- debugLogs end: false-989144 [took: 3.285521168s] --------------------------------
helpers_test.go:176: Cleaning up "false-989144" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-989144
--- PASS: TestNetworkPlugins/group/false (3.60s)
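
Note: exit status 14 is again the intended result: --cni=false is rejected because, as the MK_USAGE message says, the crio container runtime requires CNI. A sketch of an invocation that would be accepted instead (bridge is just one explicit CNI choice; omitting --cni and letting minikube pick one automatically also works, as the other crio starts in this report show):

	out/minikube-linux-arm64 start -p false-989144 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio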

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (63.63s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-736081 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-736081 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m3.628589758s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (63.63s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-736081 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0e3dec6c-049f-485c-ac7f-6e44f1f434bb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0e3dec6c-049f-485c-ac7f-6e44f1f434bb] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003749346s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-736081 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.40s)
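
Note: the DeployApp step is identical across the StartStop groups: apply testdata/busybox.yaml, wait for the pod labelled integration-test=busybox to become healthy, then exec into it. Outside the test harness the wait can be approximated with kubectl directly (a sketch; the helper's readiness check may differ slightly from a plain Ready condition):

	kubectl --context old-k8s-version-736081 -n default wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m0s
	kubectl --context old-k8s-version-736081 exec busybox -- /bin/sh -c "ulimit -n"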

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-736081 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-736081 --alsologtostderr -v=3: (12.01040542s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-736081 -n old-k8s-version-736081
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-736081 -n old-k8s-version-736081: exit status 7 (86.822061ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-736081 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (55.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-736081 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-736081 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (55.290579211s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-736081 -n old-k8s-version-736081
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (55.68s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-dx84v" [7aa0389e-a563-4119-879e-6c8c9d6456b2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003020251s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-dx84v" [7aa0389e-a563-4119-879e-6c8c9d6456b2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003647561s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-736081 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-736081 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (48.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-290628 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-290628 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (48.406789827s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (48.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-290628 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [5b8f46d9-c5f8-4d7b-a581-b98bf5d92055] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [5b8f46d9-c5f8-4d7b-a581-b98bf5d92055] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003644201s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-290628 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-290628 --alsologtostderr -v=3
E0110 02:45:18.352206    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-290628 --alsologtostderr -v=3: (11.986298973s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.99s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-290628 -n embed-certs-290628
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-290628 -n embed-certs-290628: exit status 7 (65.496659ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-290628 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (55s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-290628 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E0110 02:46:01.892720    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-290628 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (54.644743156s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-290628 -n embed-certs-290628
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (55.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-hxqv7" [977683d8-e483-4b92-bf6c-46edb7d03a89] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003143936s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-hxqv7" [977683d8-e483-4b92-bf6c-46edb7d03a89] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003194021s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-290628 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-290628 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (53.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-676905 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E0110 02:47:30.752868    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:47:30.758195    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:47:30.768561    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:47:30.788931    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:47:30.829211    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:47:30.909524    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:47:31.069697    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:47:31.390778    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:47:32.031834    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-676905 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (53.546710922s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (53.55s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-676905 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a06101cc-6efa-4b52-aa20-b89a0e6bf859] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0110 02:47:33.312076    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:47:35.872815    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [a06101cc-6efa-4b52-aa20-b89a0e6bf859] Running
E0110 02:47:40.993887    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004252358s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-676905 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.30s)
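Note (editor's sketch, not part of the suite): the DeployApp step above ends by exec'ing "ulimit -n" inside the busybox pod it created from testdata/busybox.yaml. A minimal standalone Go sketch of that final probe, assuming the pod is already Running in the default namespace of the no-preload-676905 context:

	// ulimitprobe.go - hypothetical illustration; the real logic lives in
	// start_stop_delete_test.go. Assumes a Running pod named "busybox".
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "no-preload-676905",
			"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
		if err != nil {
			log.Fatalf("exec failed: %v\n%s", err, out)
		}
		// The test only requires the command to succeed; printing the
		// open-file limit here is purely for illustration.
		fmt.Println("open file limit:", strings.TrimSpace(string(out)))
	}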

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-676905 --alsologtostderr -v=3
E0110 02:47:51.234740    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-676905 --alsologtostderr -v=3: (12.071531942s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-676905 -n no-preload-676905
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-676905 -n no-preload-676905: exit status 7 (67.471088ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-676905 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)
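Note (editor's sketch): the status probe above exits non-zero by design once the node is stopped, which the harness records as "status error: exit status 7 (may be ok)". A minimal Go sketch, assuming the exit-code convention seen in this run (7 reported while the host is stopped), of tolerating that case rather than failing hard; this is not minikube source:

	// statusafterstop.go - hypothetical illustration of the check logged at
	// start_stop_delete_test.go:237.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "no-preload-676905", "-n", "no-preload-676905")
		out, err := cmd.CombinedOutput()
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
			// Observed in this run while the host is stopped; treat as expected.
			fmt.Printf("host reported as stopped (exit 7): %s\n", out)
			return
		}
		if err != nil {
			log.Fatalf("status failed unexpectedly: %v\n%s", err, out)
		}
		fmt.Printf("host status: %s\n", out)
	}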

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (47.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-676905 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E0110 02:48:11.715028    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-676905 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (47.385377706s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-676905 -n no-preload-676905
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (47.78s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-zvbxj" [564befe3-96dc-47eb-8c30-2ce2d13a154a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003254892s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-zvbxj" [564befe3-96dc-47eb-8c30-2ce2d13a154a] Running
E0110 02:48:52.675629    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002726093s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-676905 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-676905 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)
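Note (editor's sketch): VerifyKubernetesImages works off "image list --format=json" and then flags repositories outside the expected set, which is where the two "Found non-minikube image" lines come from. A rough Go sketch of inspecting that output; the JSON field names are deliberately not assumed, so the payload is decoded generically:

	// imagelist.go - hypothetical illustration; the real comparison happens in
	// start_stop_delete_test.go.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "no-preload-676905",
			"image", "list", "--format=json").Output()
		if err != nil {
			log.Fatalf("image list failed: %v", err)
		}
		// Assumes the output is a JSON array of objects, as it appears to be here.
		var images []map[string]interface{}
		if err := json.Unmarshal(out, &images); err != nil {
			log.Fatalf("unexpected JSON shape: %v", err)
		}
		fmt.Printf("cluster reports %d images\n", len(images))
		for _, img := range images {
			fmt.Println(img) // inspect fields manually; schema not assumed
		}
	}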

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (48.67s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-403885 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-403885 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (48.665040731s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (48.67s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (30.79s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-733680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-733680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (30.792324008s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (30.79s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-733680 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-733680 --alsologtostderr -v=3: (1.442366418s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.44s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-733680 -n newest-cni-733680
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-733680 -n newest-cni-733680: exit status 7 (77.806471ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-733680 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (13.9s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-733680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-733680 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (13.458424769s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-733680 -n newest-cni-733680
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.55s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-403885 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [e41c4ad3-2a0a-45ba-8fe8-31dfd38ad288] Pending
helpers_test.go:353: "busybox" [e41c4ad3-2a0a-45ba-8fe8-31dfd38ad288] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [e41c4ad3-2a0a-45ba-8fe8-31dfd38ad288] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.023904602s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-403885 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.55s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-733680 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-403885 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-403885 --alsologtostderr -v=3: (13.035560073s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.04s)

                                                
                                    
TestPreload/PreloadSrc/gcs (6.57s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-876828 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-gcs-876828 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (6.388754047s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-876828" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-876828
--- PASS: TestPreload/PreloadSrc/gcs (6.57s)

                                                
                                    
TestPreload/PreloadSrc/github (8.16s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-github-831481 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
E0110 02:50:14.595855    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-dl-github-831481 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (7.911802578s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-831481" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-github-831481
--- PASS: TestPreload/PreloadSrc/github (8.16s)
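Note (editor's sketch): the PreloadSrc subtests differ only in where the preload tarball is fetched from (--preload-source=gcs vs github); both pass --download-only, so no cluster is created and the throwaway profile is deleted immediately. A small Go sketch re-issuing the same flags as logged above; the profile name below is hypothetical and the code is not part of preload_test.go:

	// preloadsrc.go - hypothetical helper mirroring the download-only runs above.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func run(args ...string) {
		cmd := exec.Command("out/minikube-linux-arm64", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("%v failed: %v", args, err)
		}
	}

	func main() {
		profile := "preload-src-demo" // hypothetical profile name
		run("start", "-p", profile, "--download-only",
			"--kubernetes-version", "v1.34.0-rc.2",
			"--preload-source=github",
			"--alsologtostderr", "--v=1",
			"--driver=docker", "--container-runtime=crio")
		run("delete", "-p", profile)
	}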

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-403885 -n default-k8s-diff-port-403885
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-403885 -n default-k8s-diff-port-403885: exit status 7 (75.250696ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-403885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.69s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-403885 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E0110 02:50:18.347176    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/functional-866562/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-403885 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (50.324693064s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-403885 -n default-k8s-diff-port-403885
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.69s)

                                                
                                    
TestPreload/PreloadSrc/gcs-cached (0.62s)

                                                
                                                
=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-dl-gcs-cached-145587 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-145587" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-dl-gcs-cached-145587
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.62s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (52.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-989144 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0110 02:50:44.938369    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:51:01.892148    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/addons-106930/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-989144 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (52.23503529s)
--- PASS: TestNetworkPlugins/group/auto/Start (52.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-l5llr" [8e54c32d-c19d-4259-975d-beb32bc661d7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003935515s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-l5llr" [8e54c32d-c19d-4259-975d-beb32bc661d7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003220722s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-403885 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-989144 "pgrep -a kubelet"
I0110 02:51:15.845000    4168 config.go:182] Loaded profile config "auto-989144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-989144 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-99t8x" [4f42dd66-fdbe-4efa-874c-7bab1886a42d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-99t8x" [4f42dd66-fdbe-4efa-874c-7bab1886a42d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.007205668s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.31s)
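Note (editor's sketch): each NetCatPod step force-replaces the netcat deployment and then waits for pods matching app=netcat to become healthy, with helpers_test.go:353 logging the pod phases as it polls. An equivalent check from outside the suite is sketched below; the helper itself polls via the Kubernetes client rather than shelling out, so this is only an approximation:

	// netcatwait.go - hypothetical illustration of the apply-and-wait shape of
	// net_test.go's NetCatPod step, driven through plain kubectl.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func kubectl(args ...string) {
		cmd := exec.Command("kubectl", append([]string{"--context", "auto-989144"}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("kubectl %v: %v", args, err)
		}
	}

	func main() {
		// Same manifest the test uses; path is relative to the test directory.
		kubectl("replace", "--force", "-f", "testdata/netcat-deployment.yaml")
		// The suite allows up to 15m; Ready is what the "healthy within ..."
		// message above ultimately reflects.
		kubectl("wait", "--for=condition=Ready", "pod", "-l", "app=netcat", "--timeout=15m")
	}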

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-403885 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-989144 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-989144 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-989144 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
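Note (editor's sketch): the three short checks above probe, in order, cluster DNS (nslookup kubernetes.default), the pod's own loopback (nc to localhost:8080), and hairpin traffic back through the pod's Service name (nc to netcat:8080), which only works if the CNI allows NAT loopback. A Go sketch bundling essentially the same probes; it is an illustration, not the suite's net_test.go helpers:

	// connprobes.go - hypothetical illustration of the DNS, Localhost and
	// HairPin probes logged above for the auto-989144 cluster.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func probe(name, shellCmd string) {
		out, err := exec.Command("kubectl", "--context", "auto-989144",
			"exec", "deployment/netcat", "--", "/bin/sh", "-c", shellCmd).CombinedOutput()
		fmt.Printf("%s: err=%v\n%s\n", name, err, out)
	}

	func main() {
		probe("DNS", "nslookup kubernetes.default")
		probe("Localhost", "nc -w 5 -i 5 -z localhost 8080")
		// Reaching the pod's own Service name from inside the pod exercises
		// hairpin (NAT loopback) handling in the CNI.
		probe("HairPin", "nc -w 5 -i 5 -z netcat 8080")
	}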

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (53.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-989144 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-989144 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (53.344132411s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (53.34s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (57.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-989144 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-989144 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (57.358754488s)
--- PASS: TestNetworkPlugins/group/calico/Start (57.36s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-44l6k" [18486cc9-c373-4d8e-91c6-5ccbf98ba36c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003998045s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-989144 "pgrep -a kubelet"
I0110 02:52:29.460419    4168 config.go:182] Loaded profile config "kindnet-989144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-989144 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-bbtvx" [aa4d7266-f556-48d0-a98a-3004cee7beca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0110 02:52:30.753137    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:52:32.435560    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:52:32.441159    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:52:32.451802    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:52:32.472503    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:52:32.513172    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:52:32.594826    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:52:32.755397    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:52:33.076698    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:52:33.716977    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:52:34.997807    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-bbtvx" [aa4d7266-f556-48d0-a98a-3004cee7beca] Running
E0110 02:52:37.558037    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005259435s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-989144 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-989144 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-989144 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-9gxmh" [2fbf3143-52d1-4277-bd03-90e9f35ffd43] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-9gxmh" [2fbf3143-52d1-4277-bd03-90e9f35ffd43] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00435196s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-989144 "pgrep -a kubelet"
E0110 02:52:52.920739    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I0110 02:52:53.119515    4168 config.go:182] Loaded profile config "calico-989144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-989144 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-24lqv" [9a5f4862-b1b3-4867-9b35-f895306b8d5b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0110 02:52:58.436699    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/old-k8s-version-736081/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-24lqv" [9a5f4862-b1b3-4867-9b35-f895306b8d5b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.006587235s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.44s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (59.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-989144 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-989144 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (59.12859645s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.13s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-989144 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-989144 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-989144 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (69.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-989144 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0110 02:53:54.361432    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/no-preload-676905/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-989144 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m9.046174768s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (69.05s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-989144 "pgrep -a kubelet"
I0110 02:54:03.959307    4168 config.go:182] Loaded profile config "custom-flannel-989144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-989144 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-lbh2j" [68261c95-a04e-49dc-b665-fe337c2bf665] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-lbh2j" [68261c95-a04e-49dc-b665-fe337c2bf665] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004172205s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.41s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-989144 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-989144 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-989144 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (51.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-989144 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-989144 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (51.970431117s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.97s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-989144 "pgrep -a kubelet"
I0110 02:54:42.323008    4168 config.go:182] Loaded profile config "enable-default-cni-989144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-989144 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-dp258" [4c99d41d-a672-4145-a152-f5ca16f27067] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-dp258" [4c99d41d-a672-4145-a152-f5ca16f27067] Running
E0110 02:54:51.251933    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:54:51.257167    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:54:51.267407    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:54:51.288369    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:54:51.328736    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:54:51.408883    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:54:51.570051    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:54:51.890310    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:54:52.531180    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 02:54:53.811557    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.002885731s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.42s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-989144 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-989144 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-989144 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (74.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-989144 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-989144 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m14.259558469s)
--- PASS: TestNetworkPlugins/group/bridge/Start (74.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-v86n7" [9dddbca7-8389-41c4-b0fd-51079b9d7642] Running
E0110 02:55:32.214926    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/default-k8s-diff-port-403885/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003825922s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-989144 "pgrep -a kubelet"
I0110 02:55:35.588736    4168 config.go:182] Loaded profile config "flannel-989144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-989144 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-mxfzt" [68c34c2e-811d-41e7-8602-a39a1ebc8946] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-mxfzt" [68c34c2e-811d-41e7-8602-a39a1ebc8946] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003929462s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.27s)
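A sketch of the NetCatPod flow, under the assumption that "kubectl wait" is an acceptable stand-in for the test's own pod polling. The manifest path and context name are copied from the log; the run helper is hypothetical.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run streams a single kubectl invocation's output; purely illustrative.
func run(args ...string) error {
	cmd := exec.Command("kubectl", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	ctx := "flannel-989144"
	// (Re)create the netcat deployment from the test's manifest.
	if err := run("--context", ctx, "replace", "--force", "-f", "testdata/netcat-deployment.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, "deploy failed:", err)
		os.Exit(1)
	}
	// Block until the pod is Ready instead of polling by hand.
	if err := run("--context", ctx, "wait", "--for=condition=Ready",
		"pod", "-l", "app=netcat", "--timeout=15m"); err != nil {
		fmt.Fprintln(os.Stderr, "netcat pod never became Ready:", err)
		os.Exit(1)
	}
	fmt.Println("netcat pod is Ready")
}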

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-989144 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)
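The DNS subtest is a single exec into the netcat pod. A minimal sketch with a simplified success check (the real assertion may differ); the context name comes from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Resolve kubernetes.default from inside the netcat pod, as the test does.
	out, err := exec.Command("kubectl", "--context", "flannel-989144",
		"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default").CombinedOutput()
	if err != nil {
		fmt.Printf("nslookup failed: %v\n%s", err, out)
		return
	}
	// Simplified check: any resolved address line counts as success here.
	if strings.Contains(string(out), "Address") {
		fmt.Println("cluster DNS resolves kubernetes.default")
	}
}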

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-989144 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-989144 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.40s)
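Localhost and HairPin run the same netcat probe against different targets: a zero-I/O connect (-z) with a 5-second timeout to port 8080, executed inside the netcat pod; the hairpin case dials the pod's own service name. A combined sketch, assuming the flannel-989144 context from the log; the ncCheck helper is hypothetical.

package main

import (
	"fmt"
	"os/exec"
)

// ncCheck execs the netcat probe inside the pod and reports any failure.
func ncCheck(kubeContext, target string) error {
	cmd := exec.Command("kubectl", "--context", kubeContext,
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target))
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("nc %s: %v\n%s", target, err, out)
	}
	return nil
}

func main() {
	for _, target := range []string{"localhost", "netcat"} {
		if err := ncCheck("flannel-989144", target); err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("port 8080 reachable via %s\n", target)
	}
}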

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-989144 "pgrep -a kubelet"
I0110 02:56:33.129774    4168 config.go:182] Loaded profile config "bridge-989144": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-989144 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-89k5z" [f836e011-7206-4bbe-86d1-e547474acfcd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0110 02:56:36.603336    4168 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22414-2353/.minikube/profiles/auto-989144/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-5dd4ccdc4b-89k5z" [f836e011-7206-4bbe-86d1-e547474acfcd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003660955s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-989144 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-989144 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-989144 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (31/332)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.44s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-857873 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-857873" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-857873
--- SKIP: TestDownloadOnlyKic (0.44s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1797: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-990753" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-990753
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-989144 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-989144

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-989144

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-989144

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-989144

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-989144

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-989144

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-989144

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-989144

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-989144

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-989144

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-989144

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-989144" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-989144" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-989144

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-989144"

                                                
                                                
----------------------- debugLogs end: kubenet-989144 [took: 3.350862565s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-989144" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-989144
--- SKIP: TestNetworkPlugins/group/kubenet (3.51s)
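The debugLogs block above is a best-effort dump: each diagnostic command is run even though the kubenet-989144 profile was never created, and its output or error is recorded rather than aborting the run. A hypothetical sketch of that pattern with a much shorter probe list; the labels and command set are illustrative, not the test's actual list.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "kubenet-989144"
	probes := []struct {
		label string
		args  []string
	}{
		{">>> netcat: nslookup kubernetes.default", []string{"kubectl", "--context", profile, "exec", "deployment/netcat", "--", "nslookup", "kubernetes.default"}},
		{">>> host: crictl pods", []string{"out/minikube-linux-arm64", "ssh", "-p", profile, "sudo crictl pods"}},
		{">>> k8s: kubectl config", []string{"kubectl", "config", "view"}},
	}
	for _, p := range probes {
		out, err := exec.Command(p.args[0], p.args[1:]...).CombinedOutput()
		fmt.Println(p.label + ":")
		fmt.Print(string(out))
		if err != nil {
			// Failures are expected when the profile was never started; record
			// them and keep going, as the report does.
			fmt.Println("(error:", err, ")")
		}
		fmt.Println()
	}
}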

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-989144 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-989144

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-989144

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-989144

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-989144

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-989144

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-989144

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-989144

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-989144

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-989144

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-989144

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-989144

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-989144" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-989144

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-989144

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-989144

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-989144

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-989144" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-989144" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-989144

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

>>> host: cri-docker daemon config:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

>>> host: cri-dockerd version:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

>>> host: containerd daemon status:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

>>> host: containerd daemon config:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

>>> host: containerd config dump:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

>>> host: crio daemon status:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

>>> host: crio daemon config:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

>>> host: /etc/crio:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

>>> host: crio config:
* Profile "cilium-989144" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-989144"

----------------------- debugLogs end: cilium-989144 [took: 3.791589264s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-989144" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-989144
--- SKIP: TestNetworkPlugins/group/cilium (3.97s)
